Public Health Alternative Research Designs
by
Mariko Carey, Robert Sanson-Fisher
  • LAST REVIEWED: 15 June 2015
  • LAST MODIFIED: 23 February 2011
  • DOI: 10.1093/obo/9780199756797-0048

Introduction

While randomized controlled trials (RCTs) are considered the gold standard for intervention research, under some circumstances alternative research designs may offer a practical and robust approach to quantitative evaluation. Health outcomes and health behavior are influenced by a complex array of factors. These may include individual characteristics and social influences, as well as factors that relate to the environment in which the person lives and operates. Individual characteristics may relate to knowledge, skills, and attitudes. Social influences may include perceptions of normative behaviors, peer pressure, and social support structures. Environmental characteristics may relate to features of the physical environment, laws or regulations, or policies that affect behavior in a given context. Because health behavior can be influenced by factors at all these levels, interventions may need to target all or many of these levels in order to successfully change behavior. As such, public health interventions often need to try to achieve change in whole populations (e.g., schools, communities, workplaces) rather than in individual people. Where the unit of intervention is the population rather than the individual, there may be significant cost and logistical problems in accessing a sufficiently large sample to conduct an RCT. This has led to growing acknowledgment of the role of pragmatic research designs in the evaluation of public health interventions. Alternative research designs to the RCT that may be useful for the evaluation of public health interventions include interrupted time series, multiple baseline, randomized encouragement, and regression discontinuity designs, among others. Selection of a research design should take into account trade-offs among internal validity, external validity, cost, acceptability, and feasibility.
The mixed methods approach uses quantitative and qualitative data collection simultaneously and combines them to produce answers to important questions. This is a complementary approach to experimental and quasi-experimental designs, but it constitutes the use of different data collection methodologies, not an alternative research design.

General Overviews

Nutbeam 1998 gives a useful introduction to the different purposes and audiences for evaluation research. In particular, it highlights the need to match the study design to the research question. A cook’s tour of alternative research designs suitable for public health evaluations is provided by Mercer, et al. 2007. A more detailed and in-depth discussion of alternative research designs, including when and how they should be used, is given by Shadish, et al. 2002. Cook, et al. 2010 presents a dialogue on methodological issues in the establishment of causality.

  • Cook, Thomas D., Michael Scriven, Chris L. S. Coryn, and Stephanie D. H. Evergreen. 2010. Contemporary thinking about causation in evaluation: A dialogue with Tom Cook and Michael Scriven. American Journal of Evaluation 31.1: 105–117.

    DOI: 10.1177/1098214009354918

    Written in a conversational style, this article presents a dialogue between Cook and Scriven discussing key controversies and topics in relation to evaluation designs to establish causation. Limitations of the RCT with respect to the type of research questions that can be addressed are discussed.

  • Mercer, Shawna L., Barbara J. DeVinney, Lawrence J. Fine, Lawrence W. Green, and Denise Dougherty. 2007. Study designs for effectiveness and translation research: Identifying trade-offs. American Journal of Preventive Medicine 33.2: 139–154.

    DOI: 10.1016/j.amepre.2007.04.005

    This article provides a valuable overview of several alternative research designs and their potential application to the evaluation of health interventions. A description of each design is given, as well as examples of the types of circumstances under which each might be used.

  • Nutbeam, Don. 1998. Evaluating health promotion—Progress, problems and solutions. Health Promotion International 13.1: 27–44.

    DOI: 10.1093/heapro/13.1.27

    A general overview of the key challenges in evaluating health promotion activities is presented. A phased approach to evaluation in which research questions (and appropriate designs) may change as the health promotion program develops is recommended. Some description of alternative research designs and examples of when they have been implemented are provided.

  • Shadish, William R., Thomas D. Cook, and Donald T. Campbell. 2002. Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin.


    This textbook provides a useful overview for researchers and students of the types of designs that can be used to establish causal inference. Design characteristics and interpretation of designs such as the controlled before-and-after, regression discontinuity, and time series are described.
