Journal article Open Access

Orientation-dependent spatial memories for scenes viewed on mobile devices

Savvas Avraam; Adamantini Hatzipanayioti; Marios N. Avraamides


DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.1/metadata.xsd">
  <identifier identifierType="URL">https://zenodo.org/record/2668387</identifier>
  <creators>
    <creator>
      <creatorName>Savvas Avraam</creatorName>
      <affiliation>Department of Psychology, University of Cyprus, Nicosia, Cyprus and Silversky3D Virtual Reality Technologies Ltd, Nicosia, Cyprus</affiliation>
    </creator>
    <creator>
      <creatorName>Adamantini Hatzipanayioti</creatorName>
      <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0002-0579-4271</nameIdentifier>
      <affiliation>Max-Planck-Institute for Biological Cybernetics, Tübingen, Germany</affiliation>
    </creator>
    <creator>
      <creatorName>Marios N. Avraamides</creatorName>
      <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0002-0049-8553</nameIdentifier>
      <affiliation>Department of Psychology, University of Cyprus, Nicosia, Cyprus and RISE Centre Nicosia, Nicosia, Cyprus</affiliation>
    </creator>
  </creators>
  <titles>
    <title>Orientation-dependent spatial memories for scenes viewed on mobile devices</title>
  </titles>
  <publisher>Zenodo</publisher>
  <publicationYear>2018</publicationYear>
  <subjects>
    <subject>spatial memories</subject>
    <subject>spatial cognition</subject>
    <subject>spatial representations</subject>
    <subject>orientation dependent</subject>
  </subjects>
  <dates>
    <date dateType="Issued">2018-08-04</date>
  </dates>
  <language>en</language>
  <resourceType resourceTypeGeneral="Text">Journal article</resourceType>
  <alternateIdentifiers>
    <alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/2668387</alternateIdentifier>
  </alternateIdentifiers>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsIdenticalTo">10.1007/s00426-018-1069-5</relatedIdentifier>
    <relatedIdentifier relatedIdentifierType="URL" relationType="IsPartOf">https://zenodo.org/communities/rise-teaming-cyprus</relatedIdentifier>
  </relatedIdentifiers>
  <version>Accepted pre-print</version>
  <rightsList>
    <rights rightsURI="https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode">Creative Commons Attribution Non Commercial No Derivatives 4.0 International</rights>
    <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
  </rightsList>
  <descriptions>
    <description descriptionType="Abstract">&lt;p&gt;We examined whether spatial representations for scenes experienced on the screens of mobile devices are orientation dependent and whether the type of movement (physical vs. simulated) during learning affects the encoding and the retrieval of spatial information. Participants studied a spatial layout depicted on a tablet and then carried out perspective-taking trials in which they localized objects from imagined perspectives. Depending on condition, participants either rotated the tablet along with their body or remained stationary and swiped with their finger on the screen to change their viewpoint within the scene. Results showed that participants were faster and more accurate to point to objects from an imagined perspective that was aligned than misaligned to their initial physical orientation during learning, suggesting that they had formed an orientation-dependent representation. Although no differences were found between movement conditions during pointing, participants were faster to encode spatial information with physical than simulated movement.&lt;/p&gt;</description>
    <description descriptionType="Other">This work has received funding from the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreement No 739578 and the Government of the Republic of Cyprus through the Directorate General for European Programmes, Coordination and Development.

This is a pre-print of an article published in Psychological Research. The final authenticated version is available online at https://link.springer.com/article/10.1007/s00426-018-1069-5. © Springer-Verlag GmbH Germany, part of Springer Nature 2018</description>
    <description descriptionType="Other">{"references": ["Avraamides, M. N., &amp; Kelly, J. W. (2005). Imagined perspective changing within and across novel environments. In Spatial cognition 2004-lecture notes in artificial intelligence (pp. 245\u2013258). Berlin-Heidelberg: Springer. https://doi.org/10.1007/978-3-540-32255-9_15.", "Avraamides, M. N., &amp; Kelly, J. W. (2008). Multiple systems of spatial memory and action. Cognitive Processing, 9, 93\u2013106. https://doi.org/10.1007/s10339-007-0188-5.", "Hatzipanayioti, A., Galati, A., &amp; Avraamides, M. N. (2015). The protagonist's first perspective influences the encoding of spatial information in narratives. Quarterly Journal of Experimental Psychology, 69, 505\u2013520. https://doi.org/10.1080/17470218.2015.1056194.", "Kelly, J. W., Avraamides, M. N., &amp; Loomis, J. M. (2007). Sensorimotor alignment effects in the learning environment and in novel environments. Journal of Experimental Psychology: Learning, Memory and Cognition, 33, 1092\u20131107. https://doi.org/10.1037/0278-7393.33.6.1092.", "Loomis, J. M., Lippa, Y., Klatzky, R. L., &amp; Golledge, R. G. (2002). Spatial updating of locations specified by 3-d sound and spatial language. Journal of Experimental Psychology: Learning, Memory, &amp; Cognition, 28, 335\u2013345. https://doi.org/10.1037/e501882009-156.", "McNamara, T. P. (2003). How are the locations of objects in the environment represented in memory? In C. Freksa, W. Brauer, C. Habel &amp; K. F. Wender (Eds.), Spatial Cognition III: Routes and navigation, human memory and learning, spatial representation and spatial reasoning, LNAI 2685 (pp. 174\u2013191). Berlin: Springer. https://doi.org/10.1007/3-540-45004-1_11.", "Mou, W., Biocca, F., Owen, C. B., Tang, A., Xiao, F., &amp; Lim, L. (2004). Frames of reference in mobile augmented reality displays. Journal of Experimental Psychology: Applied, 10(4), 238\u2013244. https://doi.org/10.1037/1076-898X.10.4.238.", "Mou, W., McNamara, T. P., Valiquette, C. M., &amp; Rump, B. (2004). Allocentric and egocentric updating of spatial memories. Journal of Experimental Psychology: Learning, Memory &amp; Cognition, 30, 142\u2013157. https://doi.org/10.1037/0278-7393.30.1.142.", "Presson, C. C., &amp; Montello, D. R. (1994). Updating after rotational and translational body movements: Coordinate structure of perspective space. Perception, 23, 1447\u20131455. https://doi.org/10.1068/p231447.", "Rideout, V., &amp; Saphir, M. (2013). Zero to eight: Children's media use in America 2013. San Francisco: Common Sense Media.", "Rieser, J. J. (1989). Access to knowledge of spatial structure at novel points of observation. Journal of Experimental Psychology: Learning, Memory, &amp; Cognition, 15, 1157\u20131165. https://doi.org/10.1037/0278-7393.15.6.1157.", "Rieser, J. J., Guth, D. A., &amp; Hill, E. W. (1986). Sensitivity to perspective structure while walking without vision. Perception, 15, 173\u2013188. https://doi.org/10.1068/p150173.", "Shelton, A. L., &amp; McNamara, T. P. (1997). Multiple views of spatial memory. Psychonomic Bulletin &amp; Review, 4, 102\u2013106. https://doi.org/10.3758/bf03210780."]}</description>
  </descriptions>
  <fundingReferences>
    <fundingReference>
      <funderName>European Commission</funderName>
      <funderIdentifier funderIdentifierType="Crossref Funder ID">10.13039/501100000780</funderIdentifier>
      <awardNumber awardURI="info:eu-repo/grantAgreement/EC/H2020/739578/">739578</awardNumber>
      <awardTitle>Research Center on Interactive Media, Smart System and Emerging Technologies</awardTitle>
    </fundingReference>
  </fundingReferences>
</resource>