Published November 20, 2018 | Version v1
Dataset | Open

Data from Churan et al. 2018 Eye movements during path integration

  • Marburg University

Description

Subjects

Six human subjects (two male and four female, mean age 23 years) took part in the experiment. The subjects had normal or corrected‐to‐normal vision and normal hearing.

Apparatus

Experiments were conducted in a darkened (but not completely dark), sound-attenuated room. Subjects were seated at a distance of 114 cm from a tangential screen (70° x 55° visual angle) and their head position was stabilized by a chin rest. Visual stimuli were generated on a Windows PC using an in-house-built stimulus package and were back-projected onto the screen by a CRT projector (Electrohome Marquee 8000) at a resolution of 1152 x 864 pixels and a frame rate of 100 Hz. The auditory stimuli were also generated using MATLAB and presented to the subjects through headphones (Philips SHS390). Eye position was recorded by a video-based eye tracker (EyeLink II, SR Research) at a sampling rate of 500 Hz and an average accuracy of ~0.5°. During the distance reproduction, the subjects controlled the speed of simulated self-motion using an analog joystick (Logitech ATK3) placed on a desk in front of them. The speed of the simulated self-motion was proportional to the inclination angle of the joystick. The joystick data were acquired at a rate of 100 Hz, and the minimal change in the speed of simulated self-motion that could be triggered by the joystick was 1/1000 of the maximum range of speeds used in the experiments.
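As an illustration only, here is a hypothetical sketch of the joystick-to-speed mapping described above (the function name and the normalized inclination input are assumptions, and the 0-20 AU/s speed range is taken from the Stimuli section below; this is not the authors' stimulus code):

    # Hypothetical sketch of the described joystick-to-speed mapping; not the authors' code.
    # Assumes a normalized inclination in [0, 1] and the 0-20 AU/s speed range
    # given in the Stimuli section.
    def simulated_speed(inclination: float, max_speed: float = 20.0) -> float:
        step = max_speed / 1000.0                        # minimal triggerable speed change
        return round(inclination * max_speed / step) * step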

Stimuli

The visual stimulus consisted of a horizontal plane of white (luminance: 90 cd/m²), randomly placed small squares on a dark (<0.1 cd/m²) background that filled the lower half of the screen. The size of the squares was scaled between 0.2° and 1.9° in order to simulate depth. The direction of the simulated self-motion was always straight ahead. The distances are always quantified in arbitrary units (AU) and the speed of simulated self-motion in AU/s. The auditory stimuli were sinusoidal tones (approximately 80 dB SPL) with a frequency proportional to the simulated speed. The frequencies were in a range between 220 and 440 Hz and changed linearly as a function of the speed of the simulated self-motion, which was in the range of 0–20 AU/s.
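For reference, a minimal sketch of the stated linear speed-to-frequency mapping (the helper name is hypothetical; the endpoints are the 220 Hz / 440 Hz and 0-20 AU/s values given above):

    # Hypothetical helper illustrating the linear speed-to-frequency mapping
    # described above (220 Hz at 0 AU/s, 440 Hz at 20 AU/s); not the authors' code.
    def tone_frequency(speed_au_per_s: float) -> float:
        return 220.0 + (440.0 - 220.0) * speed_au_per_s / 20.0

    # e.g. tone_frequency(8.0) == 308.0 and tone_frequency(16.0) == 396.0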

Procedure

Each trial consisted of two phases. During the “Encoding phase,” the subjects were presented with a simulated self-motion at one of three speeds (8, 12, or 16 AU/s). Each presentation lasted 4 s, which resulted in three different traveled distances (32, 48, or 64 AU). The presentation was always bimodal, i.e., visual motion was accompanied by a sound representing the respective speed. The sound frequencies corresponding to the three speeds used during the Encoding phase were 308, 354, and 396 Hz, respectively. The task of the subjects in this phase was to monitor the distance covered for later reproduction. After the Encoding phase, a dark screen was presented for 500 ms and then the subjects had to reproduce the previously observed distance using the joystick. In different conditions of this “Reproduction phase,” either only the visual display was presented (visual condition), only the auditory stimulus was presented while the screen was dark (auditory condition), or both sources of information were available at the same time (bimodal condition). During reproduction, the subjects were able to change the simulated speed by changing the inclination of the joystick. After the subjects had reached the distance they perceived to be identical to that during the Encoding phase, they had to press a joystick button to complete the trial. The subjects were allowed to move their eyes freely during the Encoding and the Reproduction phases. There were thus nine different experimental conditions: three speeds in three modalities. In each experimental condition, 80 trials were recorded. All conditions were presented in a pseudo-randomized order and the subjects were not informed in advance about the sensory modality of the Reproduction phase.

Data

Both eye position and the speed of the simulated self-motion were recorded at a sampling rate of 500 Hz.
The file 'all_data.mat' is a MATLAB data file containing the cell structure 'all_data', which has the elements 'pas' (data from the Encoding phase) and 'akt' (data from the Reproduction phase).
The sub-structure 'pas' consists of 6 x 9 x 80 elements:
  • The first dimension represents the single subjects (6).
  • The second dimension represents the nine different conditions:
    1. 8 AU/s auditory
    2. 8 AU/s visual
    3. 8 AU/s bimodal
    4. 12 AU/s auditory
    5. 12 AU/s visual
    6. 12 AU/s bimodal
    7. 16 AU/s auditory
    8. 16 AU/s visual
    9. 16 AU/s bimodal
  • The third dimension represents the 80 trials recorded for each subject and condition.
Each element of 'pas' is a matrix with two rows: the first row gives the horizontal eye position and the second row the vertical eye position. The columns are samples recorded at 500 Hz. The simulated self-motion started at the first sample and ended 4 s (= 2000 samples) later.
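For orientation, a minimal Python loading sketch (not part of the dataset); it assumes 'all_data' is stored as a MATLAB struct whose fields 'pas' and 'akt' are 6 x 9 x 80 cell arrays, so the exact field access may need adjusting:

    # Minimal sketch: load 'all_data.mat' and pull one Encoding-phase trial.
    # Assumes 'all_data' loads as a struct with fields 'pas' and 'akt'.
    import numpy as np
    from scipy.io import loadmat

    mat = loadmat("all_data.mat", squeeze_me=True, struct_as_record=False)
    all_data = mat["all_data"]

    pas = np.asarray(all_data.pas)      # 6 x 9 x 80 cell array -> object array
    trial = pas[0, 1, 0]                # subject 1, condition 2 (8 AU/s visual), trial 1
    horiz, vert = trial[0], trial[1]    # eye position traces, sampled at 500 Hz
    t = np.arange(horiz.size) / 500.0   # time axis in seconds (Encoding phase: 4 s)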

The sub-structure 'akt' has the same general shape. The only difference is that each element consists of three rows: horizontal eye position, vertical eye position, and the (actively chosen) speed of simulated self-motion.
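Continuing the sketch above under the same assumptions, the reproduced distance in a trial can be recovered by integrating the actively chosen speed trace over time (variable names are hypothetical):

    # Continuing the hedged sketch above: one Reproduction-phase trial from 'akt'.
    akt = np.asarray(all_data.akt)                 # 6 x 9 x 80 cell array -> object array
    rep = akt[0, 1, 0]                             # subject 1, condition 2, trial 1
    eye_h, eye_v, speed = rep[0], rep[1], rep[2]   # rows as described above

    # Reproduced distance in AU: integrate the speed (AU/s) sampled at 500 Hz.
    reproduced_distance = speed.sum() / 500.0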

Notes

This research was supported by: DFG - SFB/TRR 135/A2

Files

Files (125.5 MB)

  • 125.5 MB (md5:24e75401dd77e4a8857caa92efbb9535)

Additional details

Related works

Is supplement to: 10.14814/phy2.13921 (DOI)