Liking effect induced by gaze
Data provided by: José Luis Ulloa, Clara Marchetti, Marine Taffou & Nathalie George
This dataset resulted from a study investigating how gaze perception can influence preferences. Previous studies suggest that we like objects that are looked at by others more than objects that are not looked at (a so-called liking effect). We extended previous studies to investigate both abstract and manipulable objects. Participants performed a categorization task (for items that were cued or not by gaze) and then evaluated how much they liked the items. We tested whether the liking effect could be observed for non-manipulable items (alphanumeric characters) as well as for manipulable items (common tools).
Participants were students at Heinrich-Heine-Universität Düsseldorf, Germany, who were paid for participation or received course credit. Their ages ranged from 18 to 55 years. The words to be rated consisted of 30 typical members of 30 categories drawn from the updated Battig and Montague norms (Van Overschelde, Rawson, & Dunlosky, 2004).
Data included within this vignette.
No official publication; see the citation below.
Ulloa, J. L., Marchetti, C., Taffou, M., & George, N. (2014). Only your eyes tell me what you like: Exploring the liking effect induced by other’s gaze. Cognition & Emotion. https://doi.org/10.1080/02699931.2014.919899
Social attention; Gaze; Pointing gesture; Liking; Cueing.
CC-BY
Paris, France
| Variable Name | Variable Description | Type (numeric, character, logical, etc.) |
|---|---|---|
| suj | Unique number assigned to each participant | Numeric |
| congr | Cue-targeting condition: valid vs. invalid | Character |
| item | Item identity: G, K, S, L | Character |
| liking | Liking rating response | Numeric |
In this dataset, there is a cue-targeting variable with valid and invalid conditions. In the valid cue-targeting condition, the stimulus appeared on the same side as the gaze; in the invalid cue-targeting condition, the stimulus appeared on the opposite side. We consider these two conditions separately.
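A quick cross-tabulation (a sketch, assuming the data frame DF described in the codebook above is loaded) shows how trials are distributed over conditions and items:
# cross-tabulate cue-targeting condition by item to inspect the design
table(DF$congr, DF$item)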
What is the usual standard error for these data that could be considered for our stopping rule, using the 40th percentile?
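Before running the full procedure, the two quantities in this question can be illustrated on toy data (a sketch; the se helper and toy values are hypothetical, not drawn from the dataset): compute the standard error of the mean of each item's ratings, then take the 40th percentile of those SEs as the cutoff.
# toy illustration of the stopping-rule quantities
set.seed(42)
se <- function(x) sd(x) / sqrt(length(x)) # standard error of the mean
toy <- list(A = rnorm(20), B = rnorm(20), C = rnorm(20))
ses <- sapply(toy, se) # one SE per "item"
quantile(ses, probs = .4) # the 40th percentile serves as the cutoff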
### create subset for valid cue-targeting
DF_valid <- subset(DF, congr == "valid") %>%
  group_by(suj, item) %>%
  summarize(liking = mean(liking, na.rm = TRUE)) %>%
  as.data.frame()
#> `summarise()` has grouped output by 'suj'. You
#> can override using the `.groups` argument.
### create subset for invalid cue-targeting
DF_invalid <- subset(DF, congr == "invalid") %>%
  group_by(suj, item) %>%
  summarize(liking = mean(liking, na.rm = TRUE)) %>%
  as.data.frame()
#> `summarise()` has grouped output by 'suj'. You
#> can override using the `.groups` argument.
# item-level SEs for the valid cue-targeting condition
SE1 <- tapply(DF_valid$liking, DF_valid$item, function (x) { sd(x)/sqrt(length(x)) })
SE1
#> G K L S
#> 0.2013228 0.1779694 0.1801060 0.2286006
cutoff1 <- quantile(SE1, probs = .4)
cutoff1
#> 40%
#> 0.1843494
# item-level SEs for the invalid cue-targeting condition
SE2 <- tapply(DF_invalid$liking, DF_invalid$item, function (x) { sd(x)/sqrt(length(x)) })
SE2
#> G K L S
#> 0.1982333 0.1749820 0.1724613 0.2132725
cutoff2 <- quantile(SE2, probs = .4)
cutoff2
#> 40%
#> 0.1796323
nsim <- 10 # small number of simulations to keep runtime low for CRAN
# sequence of sample sizes to try
samplesize_values <- seq(25, 200, 5)
# create a blank table for us to save the values in
sim_table <- matrix(NA,
                    nrow = length(samplesize_values)*nsim,
                    ncol = length(unique(DF_valid$item)))
# make it a data frame
sim_table <- as.data.frame(sim_table)
# add a place for sample size values
sim_table$sample_size <- NA
sim_table$var <- "liking"
# make a second table for the second variable
sim_table2 <- matrix(NA,
                     nrow = length(samplesize_values)*nsim,
                     ncol = length(unique(DF_valid$item)))
# make it a data frame
sim_table2 <- as.data.frame(sim_table2)
# add a place for sample size values
sim_table2$sample_size <- NA
sim_table2$var <- "liking"
iterate <- 1
for (p in 1:nsim){
  # loop over sample sizes for the valid and invalid conditions
  for (i in 1:length(samplesize_values)){
    # temp dataframe for the valid condition that samples and summarizes
    temp_valid <- DF_valid %>%
      dplyr::group_by(item) %>%
      dplyr::sample_n(samplesize_values[i], replace = TRUE) %>%
      dplyr::summarize(se1 = sd(liking)/sqrt(length(liking)))

    colnames(sim_table)[1:length(unique(DF_valid$item))] <- temp_valid$item
    sim_table[iterate, 1:length(unique(DF_valid$item))] <- temp_valid$se1
    sim_table[iterate, "sample_size"] <- samplesize_values[i]
    sim_table[iterate, "nsim"] <- p

    # temp dataframe for the invalid condition that samples and summarizes
    temp_invalid <- DF_invalid %>%
      dplyr::group_by(item) %>%
      dplyr::sample_n(samplesize_values[i], replace = TRUE) %>%
      dplyr::summarize(se2 = sd(liking)/sqrt(length(liking)))

    colnames(sim_table2)[1:length(unique(DF_invalid$item))] <- temp_invalid$item
    sim_table2[iterate, 1:length(unique(DF_invalid$item))] <- temp_invalid$se2
    sim_table2[iterate, "sample_size"] <- samplesize_values[i]
    sim_table2[iterate, "nsim"] <- p

    iterate <- 1 + iterate
  }
}
Calculate the cutoff score with information necessary for correction.
cutoff_valid <- calculate_cutoff(population = DF_valid,
                                 grouping_items = "item",
                                 score = "liking",
                                 minimum = min(DF_valid$liking),
                                 maximum = max(DF_valid$liking))
# same as above
cutoff_valid$cutoff
#> 40%
#> 0.1843494
cutoff_invalid <- calculate_cutoff(population = DF_invalid,
                                   grouping_items = "item",
                                   score = "liking",
                                   minimum = min(DF_invalid$liking),
                                   maximum = max(DF_invalid$liking))
cutoff_invalid$cutoff
#> 40%
#> 0.1796323
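As a sanity check (a sketch, assuming the objects created above are still in the workspace), the function-based cutoffs should match the manual 40th-percentile values computed earlier:
# calculate_cutoff() values should equal the manual quantiles
all.equal(unname(cutoff_valid$cutoff), unname(cutoff1)) # TRUE
all.equal(unname(cutoff_invalid$cutoff), unname(cutoff2)) # TRUE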
### for valid cue-targeting condition
final_sample_valid <-
  sim_table %>%
  pivot_longer(cols = -c(sample_size, var, nsim)) %>%
  dplyr::rename(item = name, se = value) %>%
  dplyr::group_by(sample_size, var, nsim) %>%
  dplyr::summarize(percent_below = sum(se <= cutoff1)/length(unique(DF_valid$item))) %>%
  ungroup() %>%
  # then average the percents across simulations
  dplyr::group_by(sample_size, var) %>%
  summarize(percent_below = mean(percent_below)) %>%
  dplyr::arrange(percent_below) %>%
  ungroup()
#> `summarise()` has grouped output by
#> 'sample_size', 'var'. You can override using
#> the `.groups` argument.
#> `summarise()` has grouped output by
#> 'sample_size'. You can override using the
#> `.groups` argument.
flextable(final_sample_valid %>% head()) %>%
  autofit()
| sample_size | var | percent_below |
|---|---|---|
| 25 | liking | 0.150 |
| 30 | liking | 0.250 |
| 35 | liking | 0.300 |
| 40 | liking | 0.425 |
| 45 | liking | 0.625 |
| 50 | liking | 0.825 |
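Although not part of the original walkthrough, a quick plot (a sketch, assuming ggplot2 is available) can make the stopping-rule curve easier to read; the same code applies to final_sample_invalid below:
# visualize the proportion of items below the cutoff by sample size
library(ggplot2)
ggplot(final_sample_valid, aes(sample_size, percent_below)) +
  geom_point() +
  geom_line() +
  geom_hline(yintercept = .80, linetype = "dashed") + # 80% criterion
  labs(x = "Sample size per item", y = "Proportion of items below cutoff")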
Calculate the final corrected scores:
final_scores <- calculate_correction(proportion_summary = final_sample_valid,
                                     pilot_sample_size = length(unique(DF$suj)),
                                     proportion_variability = cutoff_valid$prop_var)
# only show the first four rows since the remaining rows are all at or near 100%
flextable(final_scores %>%
            ungroup() %>%
            slice_head(n = 4)) %>%
  autofit()
| percent_below | sample_size | corrected_sample_size |
|---|---|---|
| 82.5 | 50 | 44.43666 |
| 92.5 | 55 | 50.02481 |
| 92.5 | 55 | 50.02481 |
| 95.0 | 60 | 55.49695 |
### for invalid cue-targeting condition
final_sample_invalid <-
  sim_table2 %>%
  pivot_longer(cols = -c(sample_size, var, nsim)) %>%
  dplyr::rename(item = name, se = value) %>%
  dplyr::group_by(sample_size, var, nsim) %>%
  dplyr::summarize(percent_below = sum(se <= cutoff2)/length(unique(DF_invalid$item))) %>%
  ungroup() %>%
  # then average the percents across simulations
  dplyr::group_by(sample_size, var) %>%
  summarize(percent_below = mean(percent_below)) %>%
  dplyr::arrange(percent_below) %>%
  ungroup()
#> `summarise()` has grouped output by
#> 'sample_size', 'var'. You can override using
#> the `.groups` argument.
#> `summarise()` has grouped output by
#> 'sample_size'. You can override using the
#> `.groups` argument.
flextable(final_sample_invalid %>% head()) %>%
  autofit()
| sample_size | var | percent_below |
|---|---|---|
| 25 | liking | 0.075 |
| 30 | liking | 0.250 |
| 35 | liking | 0.400 |
| 40 | liking | 0.600 |
| 45 | liking | 0.675 |
| 50 | liking | 0.825 |
Calculate the final corrected scores:
final_scores2 <- calculate_correction(proportion_summary = final_sample_invalid,
                                      pilot_sample_size = length(unique(DF$suj)),
                                      proportion_variability = cutoff_invalid$prop_var)
# only show the first four rows since the remaining rows are all at or near 100%
flextable(final_scores2 %>%
            ungroup() %>%
            slice_head(n = 4)) %>%
  autofit()
| percent_below | sample_size | corrected_sample_size |
|---|---|---|
| 82.5 | 50 | 44.78681 |
| 87.5 | 55 | 50.23831 |
| 92.5 | 60 | 55.67221 |
| 100.0 | 65 | 61.41225 |
Based on these simulations, we can decide that our minimum sample size for 80% of items below the cutoff is likely close to 44 for the valid trials or 45 for the invalid trials, depending on rounding.
In this example, we could set our maximum sample size at 95% of items below the criterion, which would equate to 55 for the valid trials or 61 for the invalid trials. For the invalid trials, the percent below jumps from 92.5% to 100% without passing through 95%, so the value for 100% is used.
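These values can also be pulled out programmatically (a sketch, not from the original vignette; note that percent_below in final_scores is on a 0-100 scale):
# smallest corrected sample size with at least 80% of item SEs below the cutoff
final_scores %>%
  dplyr::filter(percent_below >= 80) %>%
  dplyr::arrange(sample_size) %>%
  dplyr::slice_head(n = 1) %>%
  dplyr::pull(corrected_sample_size)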
For any sample size estimated for this study, note that the dataset has large variance in the ratings and therefore requires a relatively large sample for the items in each condition. In fact, we experimented with combining the two conditions (valid and invalid cue-targeting), which did not result in any difference.
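For reference, a minimal sketch of that combined-condition check, mirroring the per-condition code above (assuming DF is still in the workspace):
# pool valid and invalid trials, then recompute item-level SEs and the cutoff
DF_all <- DF %>%
  group_by(suj, item) %>%
  summarize(liking = mean(liking, na.rm = TRUE), .groups = "drop")
SE_all <- tapply(DF_all$liking, DF_all$item, function(x) sd(x) / sqrt(length(x)))
quantile(SE_all, probs = .4) # cutoff for the pooled data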