Conference paper Open Access

Attention-enhanced Sensorimotor Object Recognition

Thermos, S; Papadopoulos, GT; Daras, P; Potamianos, G


JSON Export

{
  "files": [
    {
      "links": {
        "self": "https://zenodo.org/api/files/8596fb13-9224-4131-b8dd-f951ee9006e2/06_CERTH_ICIP_2018.pdf"
      }, 
      "checksum": "md5:1dacb0ee42fe6a8eb8fdc71f9b4bc26f", 
      "bucket": "8596fb13-9224-4131-b8dd-f951ee9006e2", 
      "key": "06_CERTH_ICIP_2018.pdf", 
      "type": "pdf", 
      "size": 777718
    }
  ], 
  "owners": [
    93157
  ], 
  "doi": "10.1109/ICIP.2018.8451158", 
  "stats": {
    "version_unique_downloads": 95.0, 
    "unique_views": 21.0, 
    "views": 24.0, 
    "version_views": 24.0, 
    "unique_downloads": 95.0, 
    "version_unique_views": 21.0, 
    "volume": 74660928.0, 
    "version_downloads": 96.0, 
    "downloads": 96.0, 
    "version_volume": 74660928.0
  }, 
  "links": {
    "doi": "https://doi.org/10.1109/ICIP.2018.8451158", 
    "latest_html": "https://zenodo.org/record/3727849", 
    "bucket": "https://zenodo.org/api/files/8596fb13-9224-4131-b8dd-f951ee9006e2", 
    "badge": "https://zenodo.org/badge/doi/10.1109/ICIP.2018.8451158.svg", 
    "html": "https://zenodo.org/record/3727849", 
    "latest": "https://zenodo.org/api/records/3727849"
  }, 
  "created": "2020-03-26T17:12:33.186806+00:00", 
  "updated": "2020-04-17T12:27:35.067855+00:00", 
  "conceptrecid": "3727848", 
  "revision": 5, 
  "id": 3727849, 
  "metadata": {
    "access_right_category": "success", 
    "doi": "10.1109/ICIP.2018.8451158", 
    "description": "<p>Sensorimotor learning, namely the process of understanding the physical world by combining visual and motor information, has been recently investigated, achieving promising results for the task of 2D/3D object recognition. Following the recent trend in computer vision, powerful deep neural networks (NNs) have been used to model the &ldquo;sensory&rdquo; and &ldquo;motor&rdquo; information, namely the object appearance and affordance. However, the existing implementations cannot efficiently address the spatio-temporal nature of the humanobject interaction. Inspired by recent work on attention-based learning, this paper introduces an attention-enhanced NN-based model that learns to selectively focus on parts of the physical interaction where the object appearance is corrupted by occlusions and deformations. The model&rsquo;s attention mechanism relies on the confidence of classifying an object based solely on its appearance. Three metrics are used to measure the latter, namely the prediction entropy, the average N-best likelihood difference, and the N-best likelihood dispersion. Evaluation of the attention-enhanced model on the SOR3D dataset reports 33% and 26% relative improvement over the appearance-only and the spatio-temporal fusion baseline models, respectively.</p>", 
    "language": "eng", 
    "title": "Attention-enhanced Sensorimotor Object Recognition", 
    "license": {
      "id": "CC-BY-4.0"
    }, 
    "relations": {
      "version": [
        {
          "count": 1, 
          "index": 0, 
          "parent": {
            "pid_type": "recid", 
            "pid_value": "3727848"
          }, 
          "is_last": true, 
          "last_child": {
            "pid_type": "recid", 
            "pid_value": "3727849"
          }
        }
      ]
    }, 
    "communities": [
      {
        "id": "vrtogether-h2020"
      }
    ], 
    "version": "pre-print", 
    "keywords": [
      "Sensorimotor object recognition, attention mechanism, stream fusion, deep neural networks"
    ], 
    "publication_date": "2018-10-10", 
    "creators": [
      {
        "name": "Thermos, S"
      }, 
      {
        "name": "Papadopoulos, GT"
      }, 
      {
        "name": "Daras, P"
      }, 
      {
        "name": "Potamianos, G"
      }
    ], 
    "meeting": {
      "acronym": "IEEE ICIP 2018", 
      "dates": "2018 October 7-10"
    }, 
    "access_right": "open", 
    "resource_type": {
      "subtype": "conferencepaper", 
      "type": "publication", 
      "title": "Conference paper"
    }
  }
}
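
The abstract above names three metrics that drive the model's appearance-confidence estimate: prediction entropy, average N-best likelihood difference, and N-best likelihood dispersion. The sketch below (Python/NumPy) shows one plausible reading of these metrics computed from a softmax posterior vector; the function names, the default N, and the use of standard deviation as the dispersion measure are illustrative assumptions, not the paper's implementation.

# Minimal sketch (not the paper's code) of the three appearance-confidence
# metrics named in the abstract, computed over a class posterior vector.
import numpy as np

def prediction_entropy(probs: np.ndarray) -> float:
    """Shannon entropy of the class posteriors (higher = less confident)."""
    p = np.clip(probs, 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

def nbest_likelihood_difference(probs: np.ndarray, n: int = 5) -> float:
    """Average gap between the top posterior and the next n-1 best posteriors."""
    top = np.sort(probs)[::-1][:n]
    return float(np.mean(top[0] - top[1:]))

def nbest_likelihood_dispersion(probs: np.ndarray, n: int = 5) -> float:
    """Standard deviation of the n best posteriors (assumed dispersion measure)."""
    top = np.sort(probs)[::-1][:n]
    return float(np.std(top))

if __name__ == "__main__":
    softmax_output = np.array([0.55, 0.20, 0.10, 0.08, 0.05, 0.02])
    print(prediction_entropy(softmax_output))
    print(nbest_likelihood_difference(softmax_output, n=3))
    print(nbest_likelihood_dispersion(softmax_output, n=3))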
Views 24
Downloads 96
Data volume 74.7 MB
Unique views 21
Unique downloads 95
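
The links and files sections of the export point to Zenodo's public REST API. Below is a minimal sketch, assuming the endpoints remain reachable and that the third-party requests package is installed, of fetching this record (id 3727849) and checking the listed PDF against the md5 checksum given above.

# Minimal sketch: retrieve the record metadata and verify the PDF checksum
# using the URLs present in the JSON export. Field layout follows the export
# shown above; newer Zenodo API revisions may structure "files" differently.
import hashlib
import requests

RECORD_URL = "https://zenodo.org/api/records/3727849"
EXPECTED_MD5 = "1dacb0ee42fe6a8eb8fdc71f9b4bc26f"

record = requests.get(RECORD_URL, timeout=30).json()
print(record["metadata"]["title"])   # Attention-enhanced Sensorimotor Object Recognition
print(record["links"]["doi"])        # resolves to 10.1109/ICIP.2018.8451158

# Download the PDF listed under "files" and compare its MD5 to the record's value.
pdf_url = record["files"][0]["links"]["self"]
pdf_bytes = requests.get(pdf_url, timeout=60).content
print(hashlib.md5(pdf_bytes).hexdigest() == EXPECTED_MD5)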
