Assessing replication: Lessons from empirical research and applied statistics

Abstract

Given how important replication is to the logic and rhetoric of science, one would expect a standard approach to designing and analyzing replication studies. However, as various researchers have noted, this is not the case. Not only is there no standard analysis, there is not even a clear-cut definition of what it means for studies to successfully replicate. Recent research has argued that meta-analysis provides a framework for formalizing definitions of replication and analyzing replication studies. In this framework, a study’s results are viewed as estimating an underlying effect parameter, and replication would imply that parameters from different studies are similar in value. However, precisely how this similarity is defined involves statistical and theoretical considerations, which in turn affect analysis methods and their properties. This talk describes these considerations, outlines their statistical implications, and uses data from replication programs in the social sciences to shed light on how they might play out in practice.
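One common way to make this framing concrete (a minimal sketch, not the analysis presented in the talk) is a fixed-effect homogeneity test: treat each of k studies as estimating a parameter, take "exact replication" to mean all k parameters are equal, and test that hypothesis with Cochran's Q statistic. The function name and the example numbers below are illustrative assumptions, not data from any real replication program.

```python
import math

def cochran_q(effects, variances):
    """Cochran's Q homogeneity test: do k study effects share one parameter?

    effects: estimated effect sizes, one per study
    variances: their (assumed known) sampling variances
    Returns (Q, df); under exact replication, Q ~ chi-square with k - 1 df.
    """
    weights = [1.0 / v for v in variances]  # inverse-variance weights
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))
    return q, len(effects) - 1

# Three hypothetical studies of the same effect:
q, df = cochran_q([0.30, 0.12, 0.25], [0.010, 0.020, 0.015])
# For df = 2, the chi-square survival function has the closed form exp(-q/2)
p_value = math.exp(-q / 2)
```

A large Q (small p-value) is evidence against exact replication; note that with only a handful of studies this test has limited power, which is one of the statistical considerations the abstract alludes to.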

Date
Jul 30, 2019
Location
Denver, CO
Jacob M. Schauer
Postdoctoral Fellow

My research interests involve statistical methods for the social and health sciences.