Dataset Open Access

# Data from Murdison et al. (2019): Saccade-induced changes in ocular torsion reveal predictive orientation perception

Murdison, T. Scott; Blohm, Gunnar; Bremmer, Frank

### Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
<dc:creator>Murdison, T. Scott</dc:creator>
<dc:creator>Blohm, Gunnar</dc:creator>
<dc:creator>Bremmer, Frank</dc:creator>
<dc:date>2019-09-01</dc:date>
<dc:description>Participants

Eight adults with normal or corrected-to-normal vision performed the experiment (five males, three females; age range 20–30 years). Participants were paid for their participation, were all naïve to the purpose of the experiment, and all had previous experience with psychophysical experiments involving video eye tracking. Each participant gave informed written consent prior to the experiment. All procedures used in this study conformed to the Declaration of Helsinki.

Materials

Stimuli were computer-generated using the Psychophysics Toolbox (Brainard, 1997) within MATLAB (MathWorks, Natick, MA), and were projected onto a large 120 cm (81°) × 90 cm (65.5°) flat screen by means of a DS+6K-M Christie projector (Christie Digital, Cypress, California) at a frame rate of 120 Hz and a resolution of 1152 × 864 pixels. Participants sat in complete darkness 70 cm away from the screen, and a table-mounted chin rest supported their heads. The complete darkness was required to prevent participants from perceiving a compression of space, which might have confounded our data by causing all orientations to be perceived as closer to vertical than they were (Krekelberg, Kubischik, Hoffmann, &amp; Bremmer, 2003; Lappe, Awater, &amp; Krekelberg, 2000; Morrone, Ross, &amp; Burr, 1997). Eye movements were recorded using an infrared video-based Eyelink II (SR Research, Ottawa, Ontario) that was attached to the chin rest, providing a table-fixed head strap that kept each participant's head in a constant position throughout each experimental session. The screen was viewed binocularly, and eye position was sampled at 500 Hz. Prior to each block, participants performed a 13-point calibration sequence over a maximum eccentricity of 25°. The eye to which the perceptual stimulus was fovea-locked for each block was selected based on calibration performance. Drift correction was performed offline every 10 trials, based on a central fixation position. To ensure precise temporal measurement of trial start and stimulus presentation, we positioned a photosensitive diode over the lower left corner of the screen, where we flashed a white patch of pixels both at the start of each trial and at the presentation of the oriented bar stimulus (at the current on-screen gaze position of the participant). This part of the experimental apparatus was occluded from the view of the participant.
After calibrating for constant data-acquisition delays, the photosensitive diode's voltage spikes provided reliable estimates of each trial's time course (to within approximately 2 ms).

Procedure

Participants also performed a fixation version of the same task, in which they fixated one of six randomly selected locations (−20°, 0°, or +20° horizontal along either the 0° or 20° screen meridian) and we flashed the identical stimulus at the fixation location for a single frame. After the stimulus flash, participants responded with a key press indicating their perception of its orientation, exactly as in the saccade task. In all conditions (fixation, test, and control), oriented bar stimuli were presented for a single frame (8.3 ms).

Two datasets from the two experiments (saccade task and fixation task) of the cited paper (Murdison et al., 2019, Journal of Vision: https://doi.org/10.1167/19.11.10) are uploaded here. The saccade-task data are labeled 'EyeMovementData.mat' (with *.json and *.csv exports) and the fixation-task data are labeled 'FixationData.mat'. The data structure within each is described below.
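Assuming Python is used for analysis, the JSON and CSV exports of the trial table can be read with the standard library alone. The loader below is a sketch; the exact export layout should be checked against the actual files:

```python
import csv
import json
import os

def load_trial_table(path):
    """Load an exported trial table as a list of per-trial records.

    Supports the .json and .csv exports described above. The exact
    export layout is an assumption; adjust to the actual files.
    """
    ext = os.path.splitext(path)[1].lower()
    if ext == ".json":
        with open(path, "r", encoding="utf-8") as f:
            return json.load(f)
    if ext == ".csv":
        with open(path, "r", encoding="utf-8", newline="") as f:
            # Each CSV row becomes a dict keyed by the table headings.
            return list(csv.DictReader(f))
    raise ValueError("unsupported extension: " + ext)
```

The .mat files themselves can be opened directly in MATLAB, or in Python via scipy.io.loadmat (not shown here).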

EyeMovementData.mat

Within this .mat file, two types of data are represented. The first dimensions of these datasets correspond to one another:

(1) time series data within 'subject_timeseries_data', encoded as a Matlab structure. This includes 2D (X, Y) eye tracking data from every trial, with position, velocity, and acceleration both for the entire trial (i.e., 'eyeX', 'eyeY', 'eyeXv', 'eyeYv', 'eyeXa', 'eyeYa') and for the isolated primary saccade of each trial (i.e., 'saccX', 'saccY', etc.). Also included within this dataset are the 2D presentation vertices of the oriented bar stimulus for each trial (which was gaze-contingent; i.e., 'barXpos' and 'barYpos').

(2) extracted trial data within 'subject_trial_data', encoded as a Matlab table. This includes several trial-by-trial parameters that allow the user to reconstruct each trial's timing and position data. The table headings describe both the variable definition and its units. For most variables, these units are degrees of visual angle (deg), degrees per second (degPerS), degrees per second squared (degPerSSq), or seconds (s). A few entries are categorical: 'isBadTrial' is a true/false flag (1 or 0) marking trials excluded according to the criteria described in the report, and 'subjectResponse_1L2R3Err' encodes the participant's response about the direction of the perceived stimulus orientation (left = 1, right = 2, or err = 3 for erroneous entries). It is this table that has also been exported as a JSON file and as a CSV file.
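The two categorical columns could be decoded along these lines (a sketch: the field names and codes are taken from the description above, while the helper names are ours):

```python
# Labels for the 'subjectResponse_1L2R3Err' codes described above.
RESPONSE_LABELS = {1: "left", 2: "right", 3: "err"}

def decode_response(code):
    """Map a 'subjectResponse_1L2R3Err' code to a readable label."""
    return RESPONSE_LABELS[int(code)]

def good_trials(trials):
    """Keep trials not flagged by 'isBadTrial' (1 = bad, 0 = good)."""
    return [t for t in trials if int(t["isBadTrial"]) == 0]
```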

FixationData.mat

'subject_data' is a Matlab structure containing the entire dataset from the fixation task described within the paper. Each of this structure's 8 elements represents a different subject (see the 'subID' field) and contains several other data fields, including the per-trial target tilt ('tilt'), target center X and Y positions ('tarX' and 'tarY'), participant response ('resp', encoded as -1 for negative tilt percepts and +1 for positive tilt percepts), and target conditions, including false torsion predictions for each ('cond' and 'ft_hat_tab'). Conditions are numbered from 1 to 6: the control (on-axis) conditions are numbered 1, 2, and 3 for the left, center, and right eye positions, and the test (oblique) conditions are numbered 4, 5, and 6, likewise left to right. This mapping, together with the false torsion predicted for each condition, is given in the table 'ft_hat_tab'.</dc:description>
<dc:identifier>https://zenodo.org/record/4876635</dc:identifier>
<dc:identifier>10.5281/zenodo.4876635</dc:identifier>
<dc:identifier>oai:zenodo.org:4876635</dc:identifier>
<dc:language>eng</dc:language>
<dc:relation>doi:10.1167/19.11.10</dc:relation>
<dc:relation>doi:10.5281/zenodo.4876634</dc:relation>
<dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
<dc:source>Journal of Vision 19(11)</dc:source>
<dc:subject>spatial vision</dc:subject>
<dc:subject>vision science</dc:subject>
<dc:subject>psychophysics</dc:subject>
<dc:subject>eye tracking</dc:subject>
<dc:title>Data from Murdison et al. (2019): Saccade-induced changes in ocular torsion reveal predictive orientation perception</dc:title>
<dc:type>info:eu-repo/semantics/other</dc:type>
<dc:type>dataset</dc:type>
</oai_dc:dc>
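The condition numbering in 'FixationData.mat' described above (1–3 control, left to right; 4–6 test, left to right) can be sketched as a simple lookup. The label wording is ours; the authoritative mapping is in 'ft_hat_tab':

```python
def decode_condition(cond):
    """Map a 'cond' value (1-6) to (task, eye position).

    Per the description: control (on-axis) conditions are 1-3 and
    test (oblique) conditions are 4-6, each ordered left to right.
    """
    positions = ("left", "center", "right")
    task = "control" if cond in (1, 2, 3) else "test"
    return task, positions[(cond - 1) % 3]
```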
