Dataset Open Access

Data from Murdison et al. (2019): Saccade-induced changes in ocular torsion reveal predictive orientation perception

Murdison, T. Scott; Blohm, Gunnar; Bremmer, Frank

DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4" xsi:schemaLocation="">
  <identifier identifierType="DOI">10.5281/zenodo.4876635</identifier>
  <creators>
    <creator>
      <creatorName>Murdison, T. Scott</creatorName>
      <givenName>T. Scott</givenName>
      <familyName>Murdison</familyName>
      <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0002-7696-9702</nameIdentifier>
      <affiliation>Queen's University</affiliation>
    </creator>
    <creator>
      <creatorName>Blohm, Gunnar</creatorName>
      <givenName>Gunnar</givenName>
      <familyName>Blohm</familyName>
      <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0002-2297-3271</nameIdentifier>
      <affiliation>Queen's University</affiliation>
    </creator>
    <creator>
      <creatorName>Bremmer, Frank</creatorName>
      <givenName>Frank</givenName>
      <familyName>Bremmer</familyName>
      <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0003-1597-7407</nameIdentifier>
      <affiliation>Philipps-Universität Marburg</affiliation>
    </creator>
  </creators>
  <titles>
    <title>Data from Murdison et al. (2019): Saccade-induced changes in ocular torsion reveal predictive orientation perception</title>
  </titles>
  <subjects>
    <subject>spatial vision</subject>
    <subject>vision science</subject>
    <subject>eye tracking</subject>
  </subjects>
  <dates>
    <date dateType="Issued">2019-09-01</date>
  </dates>
  <resourceType resourceTypeGeneral="Dataset"/>
  <alternateIdentifiers>
    <alternateIdentifier alternateIdentifierType="url"></alternateIdentifier>
  </alternateIdentifiers>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsSupplementTo" resourceTypeGeneral="JournalArticle">10.1167/19.11.10</relatedIdentifier>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.4876634</relatedIdentifier>
  </relatedIdentifiers>
    <rights rightsURI="">Creative Commons Attribution 4.0 International</rights>
    <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
    <description descriptionType="Abstract">&lt;p&gt;&lt;strong&gt;Participants&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Eight adults with normal or corrected-to-normal vision performed the experiment (five males, three females; age range 20&amp;ndash;30 years). Participants were paid for their participation, were all na&amp;iuml;ve to the purpose of the experiment, and all had previous experience with psychophysical experiments involving video eye tracking. Each participant gave informed written consent prior to the experiment. All procedures used in this study conformed to the Declaration of Helsinki.&amp;nbsp;&lt;/p&gt;



&lt;p&gt;Stimuli were computer-generated using the Psychophysics Toolbox (Brainard,&amp;nbsp;1997) within MATLAB (MathWorks, Natick, MA), and were projected onto a large 120 cm (81&amp;deg;) &amp;times; 90 cm (65.5&amp;deg;) flat screen by a DS+6K-M Christie projector (Christie Digital, Cypress, California) at a frame rate of 120 Hz and a resolution of 1152 &amp;times; 864 pixels. Participants sat in complete darkness 70 cm from the screen, with their heads supported by a table-mounted chin rest. Complete darkness was required to prevent participants from perceiving a compression of space, which might have confounded our data by causing all orientations to be perceived as closer to vertical than they actually were (Krekelberg, Kubischik, Hoffmann, &amp;amp; Bremmer,&amp;nbsp;2003; Lappe, Awater, &amp;amp; Krekelberg,&amp;nbsp;2000; Morrone, Ross, &amp;amp; Burr,&amp;nbsp;1997). Eye movements were recorded using an infrared video-based Eyelink II (SR Research, Ottawa, Ontario) attached to the chin rest, whose table-fixed head strap kept each participant&amp;#39;s head in a constant position throughout each experimental session. The screen was viewed binocularly, and eye position was sampled at 500 Hz. Prior to each block, participants performed a 13-point calibration sequence over a maximum eccentricity of 25&amp;deg;. The eye to which the perceptual stimulus was fovea-locked for each block was selected based on calibration performance. Drift correction was performed offline every 10 trials, based on a central fixation position. To ensure precise temporal measurement of trial start and stimulus presentation, we positioned a photosensitive diode over the lower left corner of the screen, where we flashed a white patch of pixels both at the start of each trial and at the presentation of the oriented bar stimulus (at the participant&amp;#39;s current on-screen gaze position). This part of the experimental apparatus was occluded from the participant&amp;#39;s view. After calibrating for constant data-acquisition delays, the photosensitive diode&amp;#39;s voltage spikes provided reliable estimates of each trial&amp;#39;s time course (to within approximately 2 ms).&amp;nbsp;&lt;/p&gt;
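The event-timing idea described above, recovering flash times from the diode's voltage spikes, can be sketched with a simple threshold-crossing detector. The function name, threshold, and sampling rate below are illustrative assumptions; the dataset stores already-extracted timing, so this only shows the alignment principle.

```python
def flash_onsets(trace, threshold, fs):
    """Return times (in seconds) at which a photodiode voltage trace
    rises through `threshold`, given sampling rate `fs` in Hz.
    Hypothetical helper; not part of the published analysis code."""
    onsets = []
    for i in range(1, len(trace)):
        # A rising edge: previous sample below threshold, current at/above it.
        if trace[i - 1] < threshold <= trace[i]:
            onsets.append(i / fs)
    return onsets
```

For example, a trace sampled at 500 Hz with white-patch flashes at samples 2 and 6 yields onsets at 4 ms and 12 ms.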



&lt;p&gt;Participants performed a two-alternative forced choice (2AFC) perceptual task in which they made large horizontal saccades between targets 40&amp;deg; apart, either along a 20&amp;deg; vertically eccentric horizontal axis (test trials) or along the horizontal meridian of the screen (control trials,&amp;nbsp;Figure 2A). Importantly, test trials induced ocular torsion (ORT) throughout the eye movement. Participants began each trial by fixating the initial 0.3&amp;deg; diameter dot on the left side of the screen (at &amp;minus;20&amp;deg;) and indicated with a key press that they were prepared to start the trial (Figure 2B). Three hundred milliseconds later, a 0.3&amp;deg; diameter target was illuminated 40&amp;deg; to the right on the opposite side of the screen (at +20&amp;deg;). After a randomly selected duration (400&amp;ndash;600 ms), the initial target was extinguished, serving as the participant&amp;#39;s &amp;ldquo;go&amp;rdquo; cue. At some point either immediately before saccade onset (&amp;sim;250 ms prior), during the saccade (average saccade duration &amp;sim;120 ms), or after the saccade, we presented an oriented bar stimulus in one of seven orientations (from &amp;minus;8&amp;deg; to +8&amp;deg; rotated from vertical). For each trial, the exact presentation time was drawn randomly from one of four 200-ms-wide Gaussians, linearly spaced from the average reaction time (based on a 10-trial moving window) to 100 ms afterward, approximating the end of the movement. After the participant&amp;#39;s eyes had landed on the saccade target, they responded with a key press indicating the perceived stimulus orientation (counterclockwise or clockwise). The trial ended after participants made their selection. This paradigm allowed us to reliably compute each participant&amp;#39;s psychometric function with fine time resolution throughout a saccade.&amp;nbsp;&lt;/p&gt;
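The raw ingredient of the psychometric function mentioned above is simply the proportion of "clockwise" reports at each bar orientation. A minimal sketch, assuming a list of (orientation, reported-clockwise) pairs; the pair format and response coding are ours, not taken from the data files:

```python
from collections import defaultdict

def proportion_cw(trials):
    """trials: iterable of (orientation_deg, reported_cw) pairs, where
    reported_cw is True for a clockwise report. Returns a dict mapping
    orientation -> proportion of clockwise reports. Illustrative only."""
    counts = defaultdict(lambda: [0, 0])  # orientation -> [n_cw, n_total]
    for tilt, cw in trials:
        counts[tilt][0] += int(cw)
        counts[tilt][1] += 1
    return {tilt: n_cw / n for tilt, (n_cw, n) in sorted(counts.items())}
```

Fitting a sigmoid to these proportions (as in the paper) would then give the point of subjective vertical at each time point relative to the saccade.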


&lt;p&gt;&lt;strong&gt;Fixation task&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Participants also performed a fixation version of the same task in which they fixated one of six randomly selected locations (&amp;minus;20&amp;deg;, 0&amp;deg;, or +20&amp;deg; horizontally along either the 0&amp;deg; or 20&amp;deg; screen meridian) and we flashed the identical stimulus at the fixation location for a single frame. After the stimulus flash, participants responded with a key press indicating their perception of its orientation, exactly as in the first experiment. In all conditions (fixation, test, and control), oriented bar stimuli were presented for a single frame (8.3 ms).&lt;/p&gt;


&lt;p&gt;&lt;strong&gt;Description of uploaded datasets&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There are two datasets uploaded here, from the two experiments (saccade task and fixation task) of the cited paper (Murdison et al., 2019, Journal of Vision, doi:10.1167/19.11.10). The saccade task data are labeled &amp;#39;EyeMovementData.mat&amp;#39; (or *.json or *.csv)&amp;nbsp;and the fixation task data are labeled &amp;#39;FixationData.mat&amp;#39;. The data structure within each is described below.&lt;/p&gt;


&lt;p&gt;Within the saccade task file (&amp;#39;EyeMovementData.mat&amp;#39;), there are two types of data. The first dimensions of these datasets correspond to one another:&lt;/p&gt;

&lt;p&gt;(1) time series data within &amp;#39;subject_timeseries_data&amp;#39;, encoded as a Matlab structure. This&amp;nbsp;includes 2D (X, Y) eye tracking data from every trial, with position, velocity, and acceleration both for the entire trial (i.e., &amp;#39;eyeX&amp;#39;, &amp;#39;eyeY&amp;#39;, &amp;#39;eyeXv&amp;#39;, &amp;#39;eyeYv&amp;#39;, &amp;#39;eyeXa&amp;#39;, &amp;#39;eyeYa&amp;#39;) and for the isolated primary saccade from each trial (i.e., &amp;#39;saccX&amp;#39;, &amp;#39;saccY&amp;#39;, etc.). Also included in this dataset are the 2D presentation vertices of the gaze-contingent oriented bar stimulus for each trial (i.e., &amp;#39;barXpos&amp;#39; and &amp;#39;barYpos&amp;#39;).&amp;nbsp;&lt;/p&gt;

&lt;p&gt;(2) extracted trial data within &amp;#39;subject_trial_data&amp;#39;, encoded as a Matlab table. This includes several trial-by-trial parameters that allow the user to reconstruct each trial&amp;#39;s timing and position data. The table headings describe both each variable&amp;#39;s definition and its units. For most variables, these units are degrees of visual angle (deg), degrees per second (degPerS), degrees per second squared (degPerSSq), or seconds (s). A few entries are categorical; for example, &amp;#39;isBadTrial&amp;#39; is a true/false (1 or 0)&amp;nbsp;flag marking trials as &amp;#39;bad&amp;#39; according to the exclusion criteria described in the report, and &amp;#39;subjectResponse_1L2R3Err&amp;#39; encodes the participant&amp;#39;s response regarding the direction of the perceived stimulus orientation (left = 1, right = 2, or err = 3 for erroneous entries).&amp;nbsp;This table has also been exported as a JSON file and as a CSV file.&amp;nbsp;&lt;/p&gt;
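For users working from the CSV export of &#39;subject_trial_data&#39;, the two categorical columns named above can be handled with the standard library alone. A minimal sketch; only the column names &#39;isBadTrial&#39; and &#39;subjectResponse_1L2R3Err&#39; come from the description above, and the sample rows in the usage example are synthetic:

```python
import csv
import io

# Decoding for the subjectResponse_1L2R3Err column, per the description above.
RESPONSE = {"1": "left", "2": "right", "3": "error"}

def good_responses(csv_text):
    """Return decoded responses for trials not flagged as bad.
    Illustrative helper for the CSV export of 'subject_trial_data'."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [RESPONSE[r["subjectResponse_1L2R3Err"]]
            for r in rows if r["isBadTrial"] == "0"]
```

For example, with synthetic rows `"isBadTrial,subjectResponse_1L2R3Err\n0,1\n1,2\n0,2\n0,3\n"`, the flagged second trial is dropped and the remaining responses decode to left, right, and error.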


&lt;p&gt;&amp;#39;subject_data&amp;#39;, within &amp;#39;FixationData.mat&amp;#39;, is a Matlab structure containing the entire dataset from the fixation task described in the paper. Each of this structure&amp;#39;s eight elements represents a different subject (see the &amp;#39;subID&amp;#39; field) and contains several other data fields, including per-trial target tilt (&amp;#39;tilt&amp;#39;), target center X and Y positions (&amp;#39;tarX&amp;#39; and &amp;#39;tarY&amp;#39;), participant response (&amp;#39;resp&amp;#39;, encoded as -1 for negative tilt percepts and +1 for positive tilt percepts), and target conditions, including the false torsion predicted for each (&amp;#39;cond&amp;#39; and &amp;#39;ft_hat_tab&amp;#39;). Conditions are numbered 1 to 6, left to right, first for the on-axis and then for the oblique eye positions (i.e., control conditions are numbered 1, 2, and 3, left to right, and test conditions are numbered 4, 5, and 6, left to right). The mapping between conditions and the false torsion predicted for each is given in the table &amp;#39;ft_hat_tab&amp;#39;.&lt;/p&gt;</description>
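The condition numbering described above can be decoded with a small helper. The labels returned here ("control"/"test", "left"/"center"/"right") are our own shorthand for the on-axis versus oblique meridians and the -20&#176;/0&#176;/+20&#176; positions; only the 1-6 numbering itself comes from the dataset description:

```python
def decode_condition(cond):
    """Map a 'cond' value (1-6) to (axis, position), following the
    numbering in the dataset description: 1-3 control (on-axis)
    left/center/right, 4-6 test (oblique) left/center/right."""
    if cond not in range(1, 7):
        raise ValueError("condition must be 1-6")
    axis = "control" if cond <= 3 else "test"
    position = ("left", "center", "right")[(cond - 1) % 3]
    return axis, position
```

The false torsion predicted for each decoded condition would then be looked up in &#39;ft_hat_tab&#39;.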
Views: 90 (all versions) / 90 (this version)
Downloads: 13 / 13
Data volume: 7.8 GB / 7.8 GB
Unique views: 83 / 83
Unique downloads: 6 / 6