Readme for the ET-DK2 eye-tracking dataset on 360-degree images. This dataset includes gaze data, a file mapping img_index to specific stimuli, and an AOIs file.

gaze.zip: Contains individual gaze data as one S####.csv CSV file per valid participant, collected with the protocol described in the following works:

David-John, B., Hosfelt, D., Butler, K., & Jain, E. (2021). A privacy-preserving approach to streaming eye-tracking data. IEEE Transactions on Visualization and Computer Graphics (TVCG), Special Issue on IEEE Virtual Reality and 3D User Interfaces (IEEE VR) 2021.

John, B., Raiturkar, P., Le Meur, O., & Jain, E. (2018, December). A Benchmark of Four Methods for Generating 360° Saliency Maps from Eye Tracking Data. In 2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR) (pp. 136-139). IEEE.

Each CSV file is named by participant, and the columns are as follows:

SUB_ID,IMG_INDEX,IS_GRAY,GRAY_IDX,UNITY_TIMESTAMP,SMI_TIMESTAMP,GIW_X,GIW_Y,GIW_Z,GIW_TEXTURE_X,GIW_TEXTURE_Y,SCREEN_X,SCREEN_Y,HIW_X,HIW_Y,HIW_Z,HIW_TEXTURE_X,HIW_TEXTURE_Y,GAZE_DIR_X,GAZE_DIR_Y,GAZE_DIR_Z,HEAD_POS_X,HEAD_POS_Y,HEAD_POS_Z,HEAD_EUL_X,HEAD_EUL_Y,HEAD_EUL_Z

A brief description of each field is below:

-SUB_ID: Unique numeric subject identifier.
-IMG_INDEX: Indicates which image is being viewed. For the mapping between this index and the stimulus images, see img_index.csv.
-IS_GRAY: A boolean indicating whether the participant was viewing a gray transition screen (True) or the actual image (False).
-GRAY_IDX: Index representing the orientation of the gray slide, i.e., the horizontal direction of the black cross used to center gaze. Values range from 0 to 7, each corresponding to a specific longitude. For our purposes, the width of an equirectangular image spans 360 degrees, or 2pi radians.
We give the left edge of the image (x = 0) the value -180 deg (-pi), the middle column of the image 0 deg (0), and the right edge (x = img_width-1) the value 180 deg (pi). To determine which direction was used, index into the following list with the GRAY_IDX value: [-135 -90 -45 0 45 90 135 180].
-UNITY_TIMESTAMP: Timestamp logged by Unity (seconds).
-SMI_TIMESTAMP: Timestamp logged by the SMI eye tracker (nanoseconds).
-GIW_{XYZ}: 3D gaze-in-world (GIW) direction vector in the world coordinate frame, i.e., the eye-in-head gaze vector transformed by the current head rotation.
-GIW_TEXTURE_{XY}: Texture coordinates of the gaze ray's intersection with the 360 image, normalized between 0 and 1. Multiply X by the image width and Y by the image height to get absolute pixel values.
-SCREEN_{XY}: X and Y position of the gaze intersection with the HMD's display. The display size in pixels is 1920 (width) by 1080 (height).
-HIW_{XYZ}: 3D vector describing the orientation of the head (rotation only, as there is no translation within the environment). The head points in the direction of this vector in world coordinates.
-HIW_TEXTURE_{XY}: Texture coordinates of the head ray's intersection with the 360 image, normalized between 0 and 1. Multiply X by the image width and Y by the image height to get absolute pixel values.
-GAZE_DIR_{XYZ}: 3D eye-in-head (EIH) direction vector in the head coordinate frame, i.e., where the eyes point relative to the user's field of view.
-HEAD_POS_{XYZ}: 3D vector containing the position of the user's head/camera within Unity. These values may change as participants move during the study, but within Unity the viewpoint was locked to a fixed perspective at the center of the 360 image. The values therefore reflect how the participant moved their head, not a change in perspective.
-HEAD_EUL_{XYZ}: Euler angles in the respective dimensions representing the rotation of the head in 3D space, useful for transforming GAZE_DIR_{XYZ} into world coordinates. Note that gaze-in-world directions are also provided (GIW_{XYZ}).

img_index.csv: A CSV file containing the mapping between our img_index and the stimuli. Stimuli named P# correspond to the Salient360! image dataset. Stimuli named C# refer to ten 360 images taken at real construction sites near Gainesville, Florida; these cannot be shared publicly without explicit permission.

AOIs.csv: This file defines two rectangular AOIs per image in the dataset. min_x and min_y define the top-left corner of the AOI in pixels, while max_x and max_y define the bottom-right corner. The width and height in pixels of the source image are also provided.
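To illustrate reading the per-participant files, a minimal Python sketch using only the standard library. It loads one S####.csv file and drops rows recorded during the gray transition screen. The string representation of the boolean ("True"/"False") is an assumption about how Unity logged the flag; adjust the comparison if your files differ.

```python
import csv

def load_gaze_samples(path):
    """Load one participant's S####.csv file and drop rows recorded
    while the gray transition screen was shown (IS_GRAY == True).

    Assumes booleans are logged as the strings "True"/"False"."""
    samples = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["IS_GRAY"].strip().lower() == "true":
                continue  # skip gray-screen samples
            samples.append(row)
    return samples
```

Rows are returned as dicts keyed by the column names listed above, so fields such as `GIW_TEXTURE_X` can be accessed by name and converted with `float()` as needed.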
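The GRAY_IDX longitude list and the texture-coordinate convention above can be sketched in Python as follows. The image dimensions in the example call are placeholders, not values from this dataset; per-image dimensions come from AOIs.csv.

```python
# Longitudes (degrees) indexed by GRAY_IDX (0..7), as listed above.
GRAY_LONGITUDES_DEG = [-135, -90, -45, 0, 45, 90, 135, 180]

def gray_idx_to_longitude(gray_idx: int) -> int:
    """Return the longitude (degrees) of the centering cross for a GRAY_IDX."""
    return GRAY_LONGITUDES_DEG[gray_idx]

def texture_to_pixels(tex_x: float, tex_y: float,
                      img_width: int, img_height: int):
    """Convert normalized GIW_TEXTURE_{XY} or HIW_TEXTURE_{XY} values
    to absolute pixel coordinates (multiply by width/height)."""
    px = int(tex_x * img_width)
    py = int(tex_y * img_height)
    # Clamp so tex == 1.0 still maps to a valid pixel index.
    return min(px, img_width - 1), min(py, img_height - 1)

print(gray_idx_to_longitude(3))                  # 0 (cross at image center)
print(texture_to_pixels(0.5, 0.5, 4096, 2048))   # (2048, 1024)
```

The clamp at the end is our own defensive choice for samples that land exactly on the right or bottom edge.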
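As a sketch of using HEAD_EUL_{XYZ} to rotate GAZE_DIR_{XYZ} into world coordinates: the code below assumes Unity's convention of Euler angles in degrees applied in Z, then X, then Y order. Handedness and rotation order are assumptions on our part, so results should be validated against the logged GIW_{XYZ} vectors, which are the authoritative gaze-in-world values.

```python
import numpy as np

def euler_to_matrix(eul_x_deg, eul_y_deg, eul_z_deg):
    """Rotation matrix from HEAD_EUL_{XYZ}, assuming Unity's convention:
    degrees, applied in Z, then X, then Y order (an assumption --
    verify against the provided GIW_{XYZ} columns)."""
    x, y, z = np.radians([eul_x_deg, eul_y_deg, eul_z_deg])
    cx, sx = np.cos(x), np.sin(x)
    cy, sy = np.cos(y), np.sin(y)
    cz, sz = np.cos(z), np.sin(z)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Ry @ Rx @ Rz  # Z, then X, then Y

def gaze_dir_to_world(gaze_dir, head_eul):
    """Rotate an eye-in-head GAZE_DIR vector into world coordinates.
    The result should approximate the logged GIW_{XYZ} vector."""
    v = euler_to_matrix(*head_eul) @ np.asarray(gaze_dir, dtype=float)
    return v / np.linalg.norm(v)
```

With zero Euler angles the gaze direction passes through unchanged; a 90-degree yaw (HEAD_EUL_Y) rotates a forward-pointing vector onto the lateral axis.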
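A common use of AOIs.csv is testing whether a gaze sample falls inside an AOI. A minimal sketch using the min_x/min_y/max_x/max_y corner convention above; treating the bounds as inclusive is our assumption, since the readme does not specify edge handling. The pixel values in the example are hypothetical.

```python
def in_aoi(px, py, min_x, min_y, max_x, max_y):
    """True if pixel (px, py) lies inside the rectangular AOI.
    min_x/min_y is the top-left corner and max_x/max_y the bottom-right,
    per AOIs.csv; bounds are treated as inclusive (our assumption)."""
    return min_x <= px <= max_x and min_y <= py <= max_y

# Example: a gaze sample at texture (0.25, 0.5) on a hypothetical
# 4096x2048 image, tested against a hypothetical AOI rectangle.
px, py = 0.25 * 4096, 0.5 * 2048   # -> (1024.0, 1024.0)
print(in_aoi(px, py, 900, 1000, 1100, 1100))  # True
```

Gaze pixels come from scaling GIW_TEXTURE_{XY} by the image width and height, which AOIs.csv also provides per image.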