Suspect research and statistical inference

Abstract

Especially over the past two decades, the emphasis in education has been to implement policy backed by rigorous research. During that time, ‘rigorous research’ has become synonymous with randomized controlled trials and observational studies. Simultaneously, a replication crisis has arisen in science, and acutely in psychology. A prevailing opinion is that suspect research practices (SRPs), such as p-hacking, contribute significantly to this crisis. This has eroded trust in research findings, and it raises questions about how to form policy given questionable evidence. These are difficult questions with no easy answers. But one useful approach is to assess the impact of SRPs on statistical inferences and on the policymaking calculus.

This paper examines the effect of an SRP called conditional data collection (CDC), which appears to be alarmingly prevalent in psychology research. CDC involves deciding to collect more data given some observation about extant data. At its most pernicious, this observation may be the results of an interim analysis. For example, a researcher failing to find a significant result may recruit additional subjects for an experiment. This is a classic example of data snooping, and one that many researchers might condemn. However, CDC does not require such blatant p-hacking, and even seemingly benign behaviors can still lead to biased results.
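To make the mechanism concrete, here is a minimal simulation sketch of the interim-analysis example above. It is not from the paper: the sample sizes, significance level, and single "second look" are illustrative assumptions. Under the null hypothesis of no effect, testing once at an interim point and again after adding subjects pushes the false-positive rate above the nominal 5%.

```python
import numpy as np
from scipy import stats

# Illustrative simulation (assumed setup, not the paper's analysis):
# a researcher tests n=20 per group; if the result is not significant,
# they recruit 20 more per group and test again on the pooled data.
rng = np.random.default_rng(2018)
ALPHA = 0.05
N_SIMS = 10_000
N_INITIAL, N_ADDED = 20, 20

def significant(a, b):
    """Two-sample t-test; True if p < ALPHA."""
    return stats.ttest_ind(a, b).pvalue < ALPHA

fixed_hits = cdc_hits = 0
for _ in range(N_SIMS):
    # Both groups come from the same distribution, so any "effect" is spurious.
    a = rng.normal(size=N_INITIAL)
    b = rng.normal(size=N_INITIAL)

    # Fixed-sample design: a single test at the planned sample size.
    fixed_hits += significant(a, b)

    # Conditional data collection: if the interim test misses significance,
    # add subjects and test again, reporting the final result either way.
    if significant(a, b):
        cdc_hits += 1
    else:
        a = np.concatenate([a, rng.normal(size=N_ADDED)])
        b = np.concatenate([b, rng.normal(size=N_ADDED)])
        cdc_hits += significant(a, b)

print(f"Fixed-sample false-positive rate: {fixed_hits / N_SIMS:.3f}")  # ~0.05
print(f"CDC false-positive rate:          {cdc_hits / N_SIMS:.3f}")  # > 0.05
```

Even with only one conditional second look, and no intent to deceive, the researcher gets two chances to cross the significance threshold, which is why the realized error rate exceeds the nominal one.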

Date: Mar 1, 2018
Location: Washington, DC
Jacob M. Schauer
Assistant Professor

My research interests involve statistical methods for the social and health sciences.