In This Article: Effect Size

  • Introduction
  • General Overviews
  • Power Analysis and Study Planning
  • Meta-Analysis

Effect Size
David B. Flora
  • LAST MODIFIED: 29 May 2019
  • DOI: 10.1093/obo/9780199828340-0247


Simply put, effect size (ES) is the magnitude or strength of association between or among variables. Effect sizes (ESs) are commonly represented numerically (i.e., as parameters for population ESs and statistics for sample estimates of population ESs) but may also be communicated graphically. Although the word “effect” may imply that an ES quantifies the strength of a causal association (“cause and effect”), ESs are used more broadly to represent any empirical association between variables. Effect sizes serve three general purposes: reporting research results, power analysis, and meta-analysis. Even under the same research design, an ES that is appropriate for one of these purposes may not be ideal for another. Effect size can be conveyed graphically or numerically using either unstandardized metrics, which are interpreted relative to the original scales of the variables involved (e.g., the difference between two means or an unstandardized regression slope), or standardized metrics, which are interpreted in relative terms (e.g., Cohen’s d or multiple R2). Whereas unstandardized ESs and graphs illustrating ES are typically most effective for research reporting, that is, communicating the original findings of an empirical study, many standardized ES measures have been developed for use in power analysis and especially meta-analysis. Because the concept of ES is fundamental to data analysis, ES reporting has been advocated as an essential complement to null hypothesis significance testing (NHST), or even as a replacement for it. A null hypothesis significance test involves making a dichotomous judgment about whether to reject the hypothesis that a true population effect equals zero. Even within a traditional NHST paradigm, ES is a critical concept because of its central role in power analysis.
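The distinction drawn above between unstandardized and standardized metrics can be sketched with a small worked example. The data below are hypothetical, and the helper function is an illustration of one common standardized ES, Cohen’s d computed with the pooled sample standard deviation; it is not drawn from any of the works cited here.

```python
import statistics

def cohens_d(group1, group2):
    """Standardized mean difference: raw difference divided by the pooled sample SD."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)  # sample variances
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Hypothetical test scores for two groups
treatment = [105, 110, 98, 112, 107]
control = [100, 95, 102, 97, 99]

# Unstandardized ES: the raw mean difference, interpreted on the original scale
raw_diff = statistics.mean(treatment) - statistics.mean(control)  # 7.8 points

# Standardized ES: Cohen's d, interpreted in pooled-SD units
d = cohens_d(treatment, control)
```

The raw difference (7.8 points) is meaningful only if the reader knows the test’s scale, which is why unstandardized ESs suit original research reports; d expresses the same difference in standard-deviation units, which is what allows results from differently scaled measures to be combined in power analysis and meta-analysis.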

General Overviews

The works included in this section are textbooks primarily intended for applied researchers and graduate students with some basic statistics training, including some experience with NHST. Each of these resources, however, tends to focus on standardized ESs with minimal presentation of unstandardized ESs. Cumming 2012 and Kline 2013 contextualize the value of ES calculation and interpretation given the limitations of NHST and point to the central role of ESs for a cumulative science utilizing meta-analyses. Ellis 2010 offers a similar broad discussion of ESs for the interpretation of original research results and the importance of ESs for power analysis and meta-analysis. Grissom and Kim 2012 and Rosenthal, et al. 2000 offer more comprehensive treatments of ES calculation across a wide variety of research designs.

  • Cumming, G. 2012. Understanding the new statistics: Effect sizes, confidence intervals and meta-analysis. New York: Routledge.

    Clear textbook describing the limitations of an NHST paradigm for original research and a cumulative science, while advocating an increased focus on reporting and interpreting ESs. Emphasizes the importance of confidence intervals for conveying the uncertainty of a single ES statistic and of meta-analyses of ESs for a cumulative science.

  • Ellis, P. D. 2010. The essential guide to effect sizes: Statistical power, meta-analysis, and the interpretation of research results. Cambridge, UK: Cambridge Univ. Press.

    Non-technical introduction to effect sizes and their use in power analysis and meta-analysis.

  • Grissom, R. J., and J. J. Kim. 2012. Effect sizes for research: Univariate and multivariate applications. 2d ed. New York: Routledge.

    Pedagogically oriented textbook on ES calculation and interpretation. Comprehensive, but focuses on standardized ESs for a variety of research designs. Covers parametric and nonparametric as well as univariate and multivariate ESs.

  • Kline, R. B. 2013. Beyond significance testing: Statistics reform in the behavioral sciences. Washington, DC: American Psychological Association.

    Lays out the arguments against NHST but also addresses ES estimation for a range of circumstances. Also discusses replication and meta-analysis.

  • Rosenthal, R., R. L. Rosnow, and D. B. Rubin. 2000. Contrasts and effect sizes in behavioral research: A correlational approach. Cambridge, UK: Cambridge Univ. Press.

    Pedagogically oriented textbook focusing on the calculation and interpretation of standardized ESs (mainly in the correlation metric), particularly ESs for specific comparisons (i.e., contrasts) as opposed to omnibus effects for larger research designs.
