Journal article Open Access

# What p-hacking really looks like: A comment on Masicampo and LaLande (2012)

Lakens, Daniël

### Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
<dc:creator>Lakens, Daniël</dc:creator>
<dc:date>2014-12-06</dc:date>
<dc:description>Masicampo and Lalande (2012; M&amp;L) assessed the distribution of 3627 exactly calculated p-values between 0.01 and 0.10 from 12 issues of three journals. The authors concluded that "The number of p-values in the psychology literature that barely meet the criterion for statistical significance (i.e., that fall just below .05) is unusually large". "Specifically, the number of p-values between .045 and .050 was higher than that predicted based on the overall distribution of p."
There are four factors that determine the distribution of p-values, namely the number of studies examining true effects and false effects, the power of the studies that examine true effects, the Type 1 error rate (and how it was inflated), and publication bias. Due to publication bias, we should expect a substantial drop in the frequency with which p-values above .05 appear in the literature. True effects yield a right-skewed p-curve (the higher the power, the steeper the curve, e.g., Sellke, Bayarri, &amp; Berger, 2001). When the null-hypothesis is true, the p-curve is uniformly distributed, but when the Type 1 error rate is inflated due to flexibility in the data-analysis, the p-curve can become left-skewed below p-values of .05.
M&amp;L (and others, e.g., Leggett, Thomas, Loetscher, &amp; Nicholls, 2013) model p-values based on a single exponential curve estimation procedure that provides the best fit of p-values between .01 and .10 (see Figure 3, right panel). This is not a valid approach, because p-values above and below p=.05 do not lie on a continuous curve due to publication bias. It is therefore not surprising, nor indicative of a prevalence of p-values just below .05, that their single curve does not fit the data very well, nor that Chi-squared tests show the residuals (especially those just below .05) are not randomly distributed.</dc:description>
<dc:identifier>https://zenodo.org/record/235811</dc:identifier>
<dc:identifier>10.1080/17470218.2014.982664</dc:identifier>
<dc:identifier>oai:zenodo.org:235811</dc:identifier>
<dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
<dc:title>What p-hacking really looks like: A comment on Masicampo and LaLande (2012)</dc:title>
<dc:type>info:eu-repo/semantics/article</dc:type>
<dc:type>publication-article</dc:type>
</oai_dc:dc>
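The p-curve shapes described in the abstract can be illustrated with a short simulation. The sketch below is not from the commented paper; it is a minimal Python example (all function names are illustrative) using a one-sample z-test with known variance: when the effect is zero, p-values are uniform, so about 5% fall below .05; when a true effect exists, the p-curve is right-skewed and far more p-values fall below .05.

```python
import math
import random

random.seed(1)

def one_sided_p(z):
    """One-sided p-value for a standard-normal test statistic."""
    return 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))

def simulate_p_values(effect, n, n_sims=20000):
    """p-values from a one-sample z-test with known sigma = 1.

    effect=0 simulates the null hypothesis (uniform p-curve);
    effect>0 simulates a true effect (right-skewed p-curve,
    steeper as power increases with effect size or sample size).
    """
    ps = []
    for _ in range(n_sims):
        # Sample mean of n observations drawn from N(effect, 1).
        sample_mean = random.gauss(effect, 1 / math.sqrt(n))
        z = sample_mean * math.sqrt(n)
        ps.append(one_sided_p(z))
    return ps

def share_below(ps, cut):
    """Fraction of p-values smaller than the cutoff."""
    return sum(p < cut for p in ps) / len(ps)

null_ps = simulate_p_values(effect=0.0, n=20)
true_ps = simulate_p_values(effect=0.5, n=20)

# Under the null, roughly 5% of p-values fall below .05 (uniform
# p-curve); under a true effect, a much larger share does (right skew).
print(share_below(null_ps, 0.05))
print(share_below(true_ps, 0.05))
```

Note that this sketch contains no publication bias or analytic flexibility; it only reproduces the uniform-versus-right-skewed contrast. Modeling the left skew just below .05 that indicates p-hacking would additionally require simulating flexible analysis choices and selective reporting around the .05 threshold.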
