In This Article: Replication Initiatives in Psychology

  • Introduction
  • A Historical Perspective on Replication in Psychological Science
  • The Crisis of Confidence
  • Understanding Replications: From Conceptual to Close
  • Interpreting Replication Results
  • Controversies and Limitations
  • Early Collaborative Replication Attempts

Replication Initiatives in Psychology
by Jennifer Bastart, Richard A. Klein, Hans IJzerman
  • LAST REVIEWED: 24 May 2018
  • LAST MODIFIED: 24 May 2018
  • DOI: 10.1093/obo/9780199828340-0212

Introduction

Replication is a key component of a robust, cumulative knowledge base, and it plays a critical role in assessing the stability of the scientific literature. Replication involves closely repeating the procedure of a study and determining whether the results are similar to the original. For decades, behavioral scientists were reluctant to publish replications, for reasons both epistemic and pragmatic. First, original studies were in most cases viewed as conclusive, and failures to replicate were often attributed to mistakes by the replicating researcher. In addition, failures to replicate can be caused by numerous factors, and this inherent ambiguity made replications less desirable to journals. Conversely, replication successes were expected and considered to contribute little beyond what was already known. Finally, editorial policies did not encourage the publication of replications, leaving the robustness of scientific findings largely unreported. A series of events ultimately led the research community to reconsider replication and research practices at large: the discovery of several cases of large-scale scientific misconduct (i.e., fraud); the invention and application of new statistical tools to assess the strength of evidence; high-profile publications suggesting that some common practices may be less robust than previously assumed; failures to replicate some major findings of the field; and the creation of new online tools aimed at promoting transparency in the field. To deal with what is often regarded as the crisis of confidence, initiatives have been developed to increase the transparency of research practices, including (but not limited to) preregistration of studies; effect size predictions and sample size/power estimation; and, of course, replications. Replication projects themselves have evolved in quality, from early replications whose samples were as problematically small as those of the original studies to large-scale “Many Labs” collaborative projects. Ultimately, the development of higher-quality replication projects and open science tools has led (and will continue to lead) to a clearer understanding of human behavior and cognition and has contributed to a clearer distinction between exploratory and confirmatory behavioral science. This article gives an overview of the history of replications, of the development of tools and guidelines, and of review papers discussing the theoretical implications of replications.

A Historical Perspective on Replication in Psychological Science

Smith 1970 alerted the community to the lack of replication practices in psychology and reminded researchers of the importance of replication for cumulative science. More than forty years later, Makel, et al. 2012 reiterates this concern, showing that from 1900 to 2012 only about 1.07 percent of published psychology articles were replications. Scholars may have been reluctant to conduct and publish replications because of the difficulty of interpreting replication failures, as well as the community’s preference for clear and tidy conclusions from data. Giner-Sorolla 2012, for example, describes how the preference for positive (i.e., significant) results and well-written narratives may have introduced additional publication bias and decreased reproducibility. Greenwald 1975 explains how a bias against publishing null results may discourage researchers from pursuing research involving null hypotheses, resulting in an unrepresentative literature (e.g., a greater proportion of papers that erroneously reject the null). The publication in 2015 of a “replication” of Stapel and Semin 2007 made clear, however, that replications were often completed yet went unpublished (IJzerman, et al. 2015). Nosek, et al. 2012 argues that the struggle between innovation and accumulation in science led scholars to neglect close replication practices. Preferences for clear patterns in data and the difficulty of interpreting replication results were also reflected in editorial policies, which, according to Neuliep and Crandall 1990, encouraged the publication of new findings over replications.
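
The dynamic Greenwald 1975 describes can be illustrated with a brief simulation. The following Python sketch is not Greenwald’s actual model; all parameter values (the base rate of true hypotheses, statistical power, and the probability that a null result gets published) are illustrative assumptions. It shows how suppressing null results leaves the published literature dominated by significant findings, a sizable share of which erroneously reject the null.

```python
# A minimal sketch, NOT Greenwald's actual model: simulate how a bias
# against publishing null results skews the published literature.
# All parameter values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

n_studies = 100_000
alpha = 0.05           # significance threshold
power = 0.5            # assumed power when a real effect exists
prop_true = 0.2        # assumed share of tested hypotheses that are true
pub_null_prob = 0.1    # assumed chance a nonsignificant result is published

# Which studies test a real effect, and which reach significance.
effect_real = rng.random(n_studies) < prop_true
significant = np.where(effect_real,
                       rng.random(n_studies) < power,  # true positives
                       rng.random(n_studies) < alpha)  # false positives

# Significant results are always published; null results rarely are.
published = significant | (rng.random(n_studies) < pub_null_prob)

pub_sig = significant[published]
pub_real = effect_real[published]
false_pos_share = np.mean(pub_sig & ~pub_real) / np.mean(pub_sig)

print(f"Significant among all studies:       {np.mean(significant):.1%}")
print(f"Significant among published studies: {np.mean(pub_sig):.1%}")
print(f"False positives among published significant results: {false_pos_share:.1%}")
```

Under these assumed parameters, only about 14 percent of all studies yield significant results, yet roughly 62 percent of published studies do, and close to 29 percent of those published significant results erroneously reject the null, illustrating the unrepresentative literature Greenwald warned about.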

  • Giner-Sorolla, R. 2012. Science or art? How aesthetic standards grease the way through the publication bottleneck but undermine science. Perspectives on Psychological Science 7.6: 562–571.

    DOI: 10.1177/1745691612457576

This article criticizes scientific journals for favoring aesthetically pleasing writing and “clean” results. The author argues that emphasizing “clean” results over theoretical reasoning and methodological rigor leads scientists to favor the submission, and thus the publication, of flawed results.

  • Greenwald, A. G. 1975. Consequences of prejudice against the null hypothesis. Psychological Bulletin 82.1: 1–20.

    DOI: 10.1037/h0076157

    The author surveyed seventy-five social psychologists about their research and publication practices relating to null and/or positive results. Using these responses as input parameters, the author presents a mathematical model simulating the outcome of the research process and argues that publication of null results is necessary for a healthy scientific literature. The author argues that scholars should evaluate adequacy of procedure and importance of findings, rather than emphasizing only significant results. Available online.

  • IJzerman, H., N. F. E. Regenberg, J. Saddlemyer, and S. L. Koole. 2015. Perceptual effects of linguistic category priming: The Stapel and Semin (2007) paradigm revisited in twelve experiments. Acta Psychologica 157:23–29.

    DOI: 10.1016/j.actpsy.2015.01.008

This paper reports twelve nonsignificant studies on “linguistic category priming” that were originally conducted as replications of studies in Stapel and Semin 2007, originally published in the Journal of Personality and Social Psychology. Even in 2013, after Stapel had been caught in mass data fabrication and the original article had been retracted, the Journal of Personality and Social Psychology did not accept these studies because they were seen as replications.

  • Makel, M. C., J. A. Plucker, and B. Hegarty. 2012. Replications in psychology research: How often do they really occur? Perspectives on Psychological Science 7.6: 537–542.

    DOI: 10.1177/1745691612460688

This paper investigates how many replications were conducted in psychology from 1900 to 2012. The authors searched for papers containing the word stem “replicat,” finding that only 1.6 percent of papers used terms like “replicate” or “replication.” Further examination revealed that only about 1.07 percent of all papers were actual replications. Most of these published replications were successful.

  • Neuliep, J. W., and R. Crandall. 1990. Editorial bias against replication research. Journal of Social Behavior and Personality 5.4: 85–90.

The authors surveyed journal editors in the social and behavioral sciences. Overall, the results indicate that editorial boards’ reluctance to publish replications may account for the lack of published replications before 1990.

  • Nosek, B. A., J. R. Spies, and M. Motyl. 2012. Scientific utopia: II. Restructuring incentives and practices to promote truth over publishability. Perspectives on Psychological Science 7.6: 615–631.

    DOI: 10.1177/1745691612459058

This paper illustrates how publication practices favor novelty and positive results over replication and stability. The authors suggest that a disconnect between scientific ideals and scientists’ interests contributes to the current crisis of confidence, and they propose strategies for better aligning research incentives with the discovery of truth.

  • Smith, N. 1970. Replication studies: A neglected aspect of psychological research. American Psychologist 25.10: 970–975.

    DOI: 10.1037/h0029774

This paper discusses the research community’s lack of interest in replication and examines reasons why researchers do not publish replications. Available online.

  • Stapel, D. A., and G. R. Semin. 2007. The magic spell of language: Linguistic categories and their perceptual consequences. Journal of Personality and Social Psychology 93.1: 23–33.

    DOI: 10.1037/0022-3514.93.1.23

This article contains four studies purporting to show that the level of language abstraction influences perceptual focus. The Levelt Committee found that these findings were based on data fabricated by the first author; dozens of papers authored by Stapel were eventually retracted. Available online. Retraction published in 2013, Journal of Personality and Social Psychology 104.1: 197.
