Social Work: Experimental and Quasi-Experimental Designs
by Matthew Morton and Paul Montgomery

Introduction

In strengthening social work’s ability to improve lives and communities, experimental design can play a critical role in helping stakeholders better understand what works in achieving positive impacts. Experimental design studies aim to test whether a specific “intervention” (or “treatment”) causes change in specific outcomes. Experiments test for this cause-and-effect relationship by exposing a group of research participants to the intervention and observing whether their outcomes change differently from those of another group that does not receive the intervention. The group that does not receive the intervention is typically called a “control” or “comparison” group. Notably, some literature reserves the term “experimental design” for studies in which participants are randomly assigned to intervention or control groups. Other literature, however, defines the term more broadly to include what some would classify as “quasi-experimental” or “nonrandomized” trials, in which an intervention is applied to one group in order to detect changes but assignment to groups occurs through a method of selection other than randomization. This bibliography will consider experimental design in the broader context of both randomized and nonrandomized trials, but it will also supply references that clarify the special ability of randomized controlled trials to reduce bias and strengthen the credibility of causal claims. The field of experimental design includes considerable diversity with respect to specific methods, applications, and perspectives. This bibliography aims to organize some of the foremost texts and papers concerning experimental design to provide readers with (a) useful introductions to experimental design and basic principles, (b) practical references for specific audiences or topics of interest, and (c) a rounded tour of the views and debates surrounding experimental design.
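
The random assignment that distinguishes randomized from nonrandomized trials is mechanically simple. The following minimal Python sketch is illustrative only (the function name, seed, and participant IDs are hypothetical, not drawn from any work cited here); it splits a recruited sample evenly into intervention and control groups by chance alone:

    import random

    def randomize(participant_ids, seed=2011):
        """Shuffle participants and split them evenly into two groups."""
        rng = random.Random(seed)  # fixed seed so the allocation is auditable
        shuffled = list(participant_ids)
        rng.shuffle(shuffled)
        midpoint = len(shuffled) // 2
        return {"treatment": shuffled[:midpoint],
                "control": shuffled[midpoint:]}

    groups = randomize(range(1, 21))  # twenty hypothetical participants
    print(groups["treatment"])
    print(groups["control"])

Because chance alone determines group membership, known and unknown participant characteristics tend to balance across groups as samples grow, which is the bias-reducing property that the randomized trial literature below emphasizes.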

Introductory Works

This section presents texts and papers that aim to introduce the purpose and principles of experimental design to a wider audience. Chalmers 2003 provides a good first read that articulates the case for the evidence-based practice movement, from which experimental design has gained increasing momentum. Rubin and Babbie 2008, particularly chapter 10, offers an introduction to experimental design and critical concepts with the intention of reaching a social work student audience. Baker 2000 provides similar material for development impact researchers. Eccles, et al. 2003 and Kendall 2003, though geared toward a health care readership, provide useful summaries of key concepts in experimental design for unfamiliar readers. Oakley, et al. 2003; Rosen, et al. 2006; and Sibbald and Roland 1998 make nontechnical cases to general audiences for the applicability and value of experimental design. For the advanced student, Kirk 2003 is a most useful text, as it provides a more sophisticated presentation of the topic area.

  • Baker, Judy L. 2000. Evaluating the impact of development projects on poverty: A handbook for practitioners. Washington, DC: World Bank.

Available free online, this handbook offers a user-friendly overview of the impact evaluation issues and approaches in which experimental design is often situated. Different types of experimental and quasi-experimental designs are discussed.

  • Chalmers, Iain. 2003. Trying to do more good than harm in policy and practice: The role of rigorous, transparent, up-to-date evaluations. Annals of the American Academy of Political and Social Science 589.1: 22–40.

DOI: 10.1177/0002716203254762

    Chalmers articulates a case for increasing the development and use of rigorous, transparent, and up-to-date experimental designs to improve the processes by which we make decisions about whether and how to intervene in the lives of others. He further argues for systematically reviewing the state of research on a given topic prior to initiating new trials.

  • Eccles, Martin, Jeremy Grimshaw, Marion Campbell, and Craig Ramsay. 2003. Research designs for studies evaluating the effectiveness of change and improvement strategies. Quality and Safety in Health Care 12.1: 47–52.

DOI: 10.1136/qhc.12.1.47

This article briefly surveys different kinds of experimental designs for evaluating more complex behavioral interventions, introducing readers to key concepts and terms along the way.

  • Kendall, Jonathan M. 2003. Designing a research project: Randomised controlled trials and their principles. Emergency Medicine Journal 20.2: 164–168.

DOI: 10.1136/emj.20.2.164

    This article provides a basic, nontechnical summary introduction to the features and applicability of randomized designs for an unfamiliar audience.

  • Kirk, Roger E. 2003. Experimental design. In Handbook of psychology, Vol. 2, Research methods in psychology. Edited by John A. Schinka, and Wayne F. Velicer, 3–32. Hoboken, NJ: Wiley.

Though brief, this overview introduces more complex categories of experimental design (for example, hierarchical designs in which multiple treatments are nested within each other) and is relevant to readers interested in a more advanced introduction to the approaches. Kirk characterizes experimental design by the random assignment of participants.

  • Oakley, Ann, Vicki Strange, Tami Toroyan, Meg Wiggins, Ian Roberts, and Judith Stephenson. 2003. Using random allocation to evaluate social interventions: Three recent U.K. examples. Annals of the American Academy of Political and Social Science 589.1: 170–189.

DOI: 10.1177/0002716203254765

    Oakley and colleagues argue for the applicability of robust experimental design to social interventions as it has been popularly used in health and medicine. The paper provides three examples of randomized controlled trials with social interventions in the United Kingdom to illustrate strategies for conducting successful experimental trials.

  • Rosen, Laura, Orly Manor, Dan Engelhard, and David Zucker. 2006. In defense of the randomized controlled trial for health promotion research. American Journal of Public Health 96.7: 1181–1186.

DOI: 10.2105/AJPH.2004.061713

    This paper discusses the value of experimental design in evaluating health promotion interventions and responds to common criticisms of experimental design with suggestions for tailoring strategies and approaches to meet different conditions rather than abandoning experimental design altogether.

  • Rubin, Allen, and Earl R. Babbie. 2008. Research methods for social work. 6th ed. Belmont, CA: Thomson Brooks Cole.

This textbook, which can serve as a general research methods text for graduate and upper-level undergraduate social work students, dedicates chapter 10 to experimental design, which could be used as an introductory read on the topic. Unlike previous editions, this one makes explicit links throughout the book to the evidence-based practice movement.

  • Sibbald, Bonnie, and Martin Roland. 1998. Understanding controlled trials: Why are randomised controlled trials important? British Medical Journal 316.7126: 201.

    This brief note discusses the features of experimental design that make it useful and authoritative for evaluating intervention impacts.

Textbooks

The sources in this section provide companion texts for undergraduate or graduate courses that teach experimental design in the curriculum. Shadish, et al. 2001 should serve as the first-stop essential textbook, as it is a well-rounded and authoritative resource on the topic area. In several cases, textbooks were selected because they are good guides for students of experimental design within their respective subject areas: Grinnell and Unrau 2008 for social work and Harris 2008 for psychology. Other textbooks are geared more generally toward the behavioral sciences. Maxwell and Delaney 2003 and Ryan 2007 are both examples of such generally intended texts that offer practical tools for aiding the teaching process, particularly Maxwell and Delaney 2003 with its complementary CD resource. Field and Hole 2003 is a basic text for students that can serve as a good complement to an introductory or review course in statistics, which might also use Field 2009, a more statistically oriented teaching text. Wu and Hamada 2000 is a more technical text suitable for graduate students and experienced practitioners interested in an advanced reference on experimental design and analysis.

  • Field, Andy. 2009. Discovering statistics using SPSS. 3d ed. Los Angeles: SAGE.

A companion text to Field and Hole 2003, this is an easy-to-use guide to conducting statistical operations for all aspects of experimental designs. It is useful for all levels of students and researchers. It provides good supplementary web-based materials and exercises for PASW/SPSS (Predictive Analytics Software/Statistical Package for the Social Sciences).

  • Field, Andy, and Graham Hole. 2003. How to design and report experiments. London: SAGE.

This text uses a definition of experimental design that is wider than the one used in this bibliography: experiments are considered studies that manipulate parts of the environment and observe the effects. Integrating simple pictures and humor, the book guides readers through elementary design, statistical tools for analysis, and the reporting of experiments, requiring minimal technical competency.

  • Grinnell, Richard M., Jr., and Yvonne A. Unrau, eds. 2008. Social work research and evaluation: Foundations of evidence-based practice. 8th ed. New York: Oxford Univ. Press.

    This book could be used as the main text for an advanced undergraduate course or a graduate-level research methods course for social work students. It deals with evidence-based practice and research methods holistically. Thus students will learn about experimental design within the context of other methods, designs, and issues involved with evidence-based practice in social work.

  • Harris, Peter. 2008. Designing and reporting experiments in psychology. 3d ed. Buckingham, UK: Open Univ. Press.

    Geared toward undergraduate psychology students, the text is a user-friendly reference that walks students through the design and reporting of an experimental study and flags common errors in performing and reporting experimental trials.

  • Maxwell, Scott E., and Harold D. Delaney. 2003. Designing experiments and analyzing data: A model comparison perspective. 2d ed. Mahwah, NJ: Lawrence Erlbaum.

This text is aimed at the behavioral sciences. It uses real research examples to illustrate lessons and helps readers consider different techniques for design and analysis as appropriate to the context. The book includes a computer CD with SPSS (Statistical Package for the Social Sciences) and SAS (Statistical Analysis System) datasets to complement the text as well as tutorials reviewing basic statistics and regression.

  • Ryan, Thomas P. 2007. Modern experimental design. Hoboken, NJ: Wiley-Interscience.

DOI: 10.1002/0470074353

This book is intended for a graduate-level course in experimental design. The text weaves in many examples to illustrate teaching points and discusses various relevant software programs. Ryan provides a comprehensive text that begins by introducing the approach and its merits to a novice reader and thereafter extends deeper into a range of more complicated design and analysis options.

  • Shadish, William R., Thomas D. Cook, and Donald T. Campbell. 2001. Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin.

This book updates a classic, Donald T. Campbell and Julian Stanley’s “Experimental and Quasi-Experimental Designs for Research on Teaching,” in Handbook of Research on Teaching, edited by N. L. Gage (Chicago: Rand McNally, 1963), providing an important textbook on the subject that builds on the previous authors’ achievements. The text is a thorough, well-rounded guide through the theoretical matters underpinning experimental design, the rationale for using different methods, the design of experiments, and considerations for generalized causal inference.

  • Wu, C. F. Jeff, and Michael Hamada. 2000. Experiments: Planning, analysis, and parameter design optimization. Hoboken, NJ: Wiley-Interscience.

This is an advanced reference for experienced practitioners of experimental design. It is not appropriate as an introductory textbook but is a useful reference for researchers concerned with more complicated aspects of experiments.

Manuals and Guides

Manuals and guides offer step-by-step instructions supporting researchers through all or part of the process of conducting an experimental study. Shadish, et al. 2001 is an essential, well-rounded resource on the various aspects of experimental and quasi-experimental designs; it updates Campbell and Stanley 1963. Novice researchers looking to plan and implement a community-based randomized controlled trial will find a useful resource in Solomon, et al. 2009, an Oxford Pocket Guide that walks readers through each stage of the process. Keppel and Wickens 2004 uses many illustrative examples and, being written by the same authors throughout, provides a fluid guide through the stages of experimental design. Nezu and Nezu 2008 is also a useful addition to the field and offers perspectives from multiple authors covering a wide range of issues important for experimental design, including ethical considerations. Rossi, et al. 2004 situates an introduction to experimental design in the context of a systematic approach to evidence-based practice. Campbell and Stanley 1963 is a classic that has helped shape the field, and though it does not cover late-20th- and early-21st-century issues and advances in experimental design, it is still a useful and oft-consulted reference for researchers.

• Campbell, Donald T., and Julian Stanley. 1963. Experimental and quasi-experimental designs for research. Houghton Mifflin (Academic).

    This guide has been regarded as a classic text on experimental design in social research and, though old, still constitutes a useful reference for students and researchers.

  • Keppel, Geoffrey, and Thomas D. Wickens. 2004. Design and analysis: A researcher’s handbook. 4th ed. Upper Saddle River, NJ: Prentice Hall.

    This handbook provides a basic guide through the process of designing and analyzing experiments that is accessible to students and researchers with little or no formal statistical training.

  • Nezu, Arthur M., and Christine Maguth Nezu, eds. 2008. Evidence-based outcome research: A practical guide to conducting randomized controlled trials for psychosocial interventions. New York: Oxford Univ. Press.

Nezu and Nezu’s text supports researchers through designing experiments for psychosocial interventions, recruiting participants, analyzing data, reporting studies, and addressing ethical considerations.

  • Rossi, Peter H., Mark W. Lipsey, and Howard E. Freeman. 2004. Evaluation: A systematic approach. Thousand Oaks, CA: SAGE.

While this text might just as well be used as a student textbook on evaluation, it serves as a valuable hands-on reference for anyone interested in conducting experiments and in understanding key concepts and how experiments fit into the context of a systematic, evidence-based approach to programming and evaluation. Chapter 8 focuses on randomized experiments, and chapter 9 describes alternative quasi-experimental designs.

  • Shadish, William R., Thomas D. Cook, and Donald T. Campbell. 2001. Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin.

This book updates the Campbell and Stanley 1963 classic, providing an important textbook on the subject that builds on the previous authors’ achievements. The text gives a thorough, well-rounded guide through the theoretical matters underpinning experimental design, the rationale for using different methods, the design of experiments, and considerations for generalized causal inference.

  • Solomon, Phyllis, Mary M. Cavanaugh, and Jeffrey Draine. 2009. Randomized controlled trials: Design and implementation for community-based psychosocial interventions. New York: Oxford Univ. Press.

    This member of the Oxford Pocket Guide series supports readers from the beginning stages of planning a randomized controlled trial (RCT) in a community context, through implementing the design, and finally making sense of the results. The book serves as a good, focused text for carrying out randomized controlled trials in social work–related fields. A fast and easy read, it makes a challenging but important area of evaluative research accessible to those new to the topic.

Software

Two general categories of software tools are typically used with experimental design: tools for computer-generated random assignment of participants to treatment and control groups and tools for analyzing data from experimental trials. The Bland 2004 directory highlights a number of software and web-based tools for random assignment, although it does not include the updated site SealedEnvelope for randomization services and power and sample size calculations. The Schoenfeld website also provides a free online sample size calculator; proper conduct and reporting of sample size calculations in the planning stages of an evaluation is an essential step for high-quality experimental research. The Minitab and Stat-Ease programs are among several commercial software packages specifically developed to aid the design and analysis of experiments (though popular general statistical packages, such as SAS (Statistical Analysis System), SPSS (Statistical Package for the Social Sciences), and Stata, will perform basic statistical functions for data analysis).
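
As a point of reference for what such calculators compute, here is a minimal sketch using the open-source statsmodels package (an assumption of this example; statsmodels is not one of the tools named above) to estimate the sample size for a standard two-group trial:

    from statsmodels.stats.power import TTestIndPower

    # Participants needed per group to detect a medium standardized effect
    # (Cohen's d = 0.5) with 80% power at a two-sided 5% significance level.
    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
    print(round(n_per_group))  # roughly 64 per group

Online calculators such as those referenced in this section perform the same calculation behind a form interface; what matters for quality reporting is that the effect size, alpha, and power assumptions are stated in advance.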

History of Experimental Design

Experimental design lies at the heart of much of the discourse on formative research in the health and social sciences. Fisher 1935, Cochran and Cox 1992, Campbell and Stanley 1963, and Cochrane 1972 are especially pivotal historical contributions to the evidence-based practice movement in which discussion of experimental design is often situated. Fisher 1935 laid the statistical foundation, while Cochran and Cox 1992 and Campbell and Stanley 1963 each helped move social research in important directions by applying experimental methods. Cochrane 1972 helped launch evidence-based health care via rigorous experimental design methodology, and this contribution went on to influence social research substantially. Cook and Campbell 1979 is also an enduring authoritative text, and it provides a good articulation of the historical roots of experimentation in its first chapter. Oakley 1998 examines shifts in the perception and application of experimental design, especially in the United States. Krieger and Higgins 2002 and Howden-Chapman, et al. 2007 demonstrate the uptake of rigorous experimental design in the housing and social welfare field: in the five years from 2002 to 2007 the evidence base moved from largely descriptive designs to a sophisticated cluster randomized trial.

  • Campbell, Donald T., and Julian Stanley. 1963. Experimental and quasi-experimental designs for research. Houghton Mifflin (Academic).

    This guide has been regarded as a classic text on experimental design in social research and, though old, still constitutes a useful reference for students and researchers.

  • Cochran, William G., and Gertrude M. Cox. 1992. Experimental designs. 2d ed. Hoboken, NJ: Wiley.

    First published in 1957, this book provided an early review of technical issues involved with experimental design.

  • Cochrane, Archie L. 1972. Effectiveness and efficiency: Random reflections on health services. London: Nuffield Provincial Hospitals Trust.

Cochrane and this book were pivotal in advancing experimental design. These advances catalyzed the evidence-based medicine movement, leading to research institutions such as the Cochrane Collaboration, which was named in Cochrane’s honor.

  • Cook, Thomas D., and Donald T. Campbell. 1979. Quasi-experimentation: Design and analysis issues for field settings. Boston: Houghton Mifflin.

    This work constitutes a significant contribution to the development of quasi-experimental design and a useful guide. The text’s first chapter is also devoted to illustrating the historical context of experimentation in which quasi-experimental design is located.

  • Fisher, Ronald A. 1935. The design of experiments. 1st ed. London: Oliver and Boyd.

    Fisher, a revolutionary statistician, is virtually always cited in reviews of the origins of modern experimental design.

  • Howden-Chapman, Philippa, Anna Matheson, Julian Crane, Helen Viggers, Malcolm Cunningham, Tony Blakely, Chris Cunningham, Alistair Woodward, Kay Saville-Smith, Des O’Dea, Martin Kennedy, Michael Baker, Nick Waipara, Ralph Chapman, and Gabrielle Davie. 2007. Effect of insulating existing houses on health inequality: Cluster randomised study in the community. British Medical Journal 334.7591: 460–468.

DOI: 10.1136/bmj.39070.573032.80

    In contrast to Krieger and Higgins 2002, Howden-Chapman, et al. provides an example of how a field can advance both professionally and experimentally over a short period of time when researchers apply high-quality experimental design to an area of human welfare.

  • Krieger, James, and Donna L. Higgins. 2002. Housing and health: Time again for public health action. American Journal of Public Health 92.5: 758–768.

DOI: 10.2105/AJPH.92.5.758

This review documents the dearth of rigorous housing and health research at the time of its publication. See Howden-Chapman, et al. 2007 for contrast.

  • Oakley, Ann. 1998. Experimentation and social interventions: A forgotten but important history. British Medical Journal 317.7167: 1239–1242.

    This paper considers general trends in the late 20th century in the use and perceptions of experimental design for evaluating social policy and interventions.

Appraising Experiments

The references in this section will support consumers of research in critically appraising the quality and findings of experimental trials. The Bandolier website provides an easy introductory essay explaining what critical appraisal is to the novice researcher or knowledge consumer. Jüni, et al. 2001 warns against reliance on simple checklists of trial quality, arguing that such instruments can be overly simplistic. Such instruments, however, do provide a starting point for critical appraisal, and the Centre for Evidence Based Medicine (CEBM) and the Public Health Research Unit (PHRU) provide brief Critical Appraisal Skills Programme (CASP) forms on a range of trial designs that readers can download free from the Internet for such purposes. Likewise the AMSTAR (Assessment of Multiple Systematic Reviews) statement in Shea, et al. 2007 provides a checklist for appraising systematic reviews, which often synthesize and meta-analyze experimental and quasi-experimental studies. Rothwell 2005 guides readers through appraising the generalizability of findings from experimental trials to other contexts and populations.

  • Bandolier: Evidence Based Thinking about Health Care.

    The Bandolier website gives a consumers’ perspective into critical appraisal and evidence-based practice concerning health care. Those looking for examples of appraisal intended to help consumers and practitioners in evidence-based decision making should explore the site’s Knowledge Library. An easy introductory paper on critical appraisal along with other relevant subjects can be found in the Learning Zone.

  • Centre for Evidence Based Medicine (CEBM).

    The Centre for Evidence Based Medicine (CEBM) provides short, user-friendly forms to aid critical appraisal of experimental trials and other kinds of studies.

  • Jüni, Peter, Douglas G. Altman, and Matthias Egger. 2001. Systematic reviews in health care: Assessing the quality of controlled clinical trials. British Medical Journal 323.7303: 42–46.

    DOI: 10.1136/bmj.323.7303.42

    Jüni and colleagues discuss the implications of varying quality in experimental trials, and while discouraging the use of summary score scales for measuring the quality of trials in systematic reviews, the authors address some of the most important considerations in assessing study quality.

  • Public Health Research Unit (PHRU).

    This National Health Service (NHS) website provides several downloadable user-friendly forms to aid critical appraisal of experimental trials and other kinds of studies.

  • Rothwell, Peter M. 2005. External validity of randomised controlled trials: “To whom do the results of this trial apply?” Lancet 365.9453: 82–93.

    DOI: 10.1016/S0140-6736(04)17670-8

    This paper draws readers’ attention to aspects of an experimental design that will help them assess the generalizability of the findings.

  • Shea, Beverley J., Jeremy M. Grimshaw, George A. Wells, Maarten Boers, Neil Andersson, Candyce Hamel, Ashley C. Porter, Peter Tugwell, David Moher, and Lex M. Bouter. 2007. Development of AMSTAR: A measurement tool to assess the methodological quality of systematic reviews. BMC Medical Research Methodology 7.1: 10.

    DOI: 10.1186/1471-2288-7-10

    Increasingly, systematic reviews are used to summarize the state of the evidence on a particular research question by compiling and analyzing relevant experimental and quasi-experimental designs. Like experimental studies, systematic reviews vary in terms of quality and reliability. As such, a group of scholars developed AMSTAR (Assessment of Multiple Systematic Reviews), a tested eleven-item checklist, to help researchers and practitioners appraise the methodological quality of systematic reviews.

Bias

One of the most important qualities of a strong experimental design is its capacity to reduce the influence of bias on the evaluative findings. The greater the chance of bias in the design, the greater the chance that results are attributable to something other than the intervention. Delgado-Rodríguez and Llorca 2004 reviews common forms of bias and organizes them according to the types of studies in which they are found; this should be a first-stop reference for readers new to the concept of bias. Forder, et al. 2005 and Schulz, et al. 1995 are selections from a growing body of literature pointing particularly to the evidence and considerations of bias surrounding allocation concealment and blinding in experimental designs. Chan, et al. 2004, a paper on selective reporting of outcomes, and Hollis and Campbell 1999, a discussion of intention to treat analysis, both raise important issues of bias that can affect the interpretation and general application of research findings in policy and practice decisions. Puffer, et al. 2003 addresses concerns of bias specific to cluster-based experiments.

  • Chan, An-Wen, Asbjørn Hróbjartsson, Mette T. Haahr, Peter C. Gøtzsche, and Douglas G. Altman. 2004. Empirical evidence for selective reporting of outcomes in randomized trials: Comparison of protocols to published articles. Journal of the American Medical Association 291.20: 2457–2465.

    DOI: 10.1001/jama.291.20.2457

    This paper highlights tendencies toward reporting bias, whereby investigators selectively report outcomes that favor the intervention over those that do not. Reporting bias leaves readers of trials with an exaggeratedly positive impression of overall intervention effects.

  • Delgado-Rodríguez, Miguel, and Javier Llorca. 2004. Bias. Journal of Epidemiology and Community Health 58.8: 635–641.

    DOI: 10.1136/jech.2003.008466

    This paper provides an overview of the most common forms of bias affecting experimental trials and other forms of evaluative research. A useful table organizes types of biases and the types of study designs they affect.

  • Forder, Peta M., Val J. Gebski, and Anthony C. Keech. 2005. Allocation concealment and blinding: When ignorance is bliss. Medical Journal of Australia 182.2: 87–89.

    Forder and colleagues discuss how inadequate concealment of the random allocation process and of the group to which study participants belong (treatment or control group) can bias the results. To this end the authors further argue that allocation and “blinding” should be explicitly reported in study write-ups.

  • Hollis, Sally, and Fiona Campbell. 1999. What is meant by intention to treat analysis? Survey of published randomised controlled trials. British Medical Journal 319.7211: 670–674.

    Intention to treat analysis includes outcomes data for all participants who were randomly allocated. This paper examines the extent to which intention to treat analysis is met in experimental trials and discusses implications for drawing generalizable conclusions from studies that fall short of intention to treat.

  • Puffer, Suezann, David Torgerson, and Judith Watson. 2003. Evidence for risk of bias in cluster randomised trials: Review of recent trials published in three general medical journals. British Medical Journal 327.7418: 785–789.

    DOI: 10.1136/bmj.327.7418.785

    This paper discusses evidence and implications of bias specifically for experimental designs that randomly assign groups of participants rather than individuals.

  • Schulz, Kenneth F., Iain Chalmers, Richard J. Hayes, and Douglas G. Altman. 1995. Empirical evidence of bias: Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. Journal of the American Medical Association 273.5: 408–412.

    DOI: 10.1001/jama.273.5.408

    This study provides evidence, from an analysis of the methodological quality of 250 experimental trials, that poor methodological quality, particularly with respect to allocation concealment, is associated with biased results.

Statistical Principles and Analysis

Statistical aspects of experimental design are particularly important in studying the value of randomization, determining adequate sample size, and analyzing data. For the novice, Field 2009 is an excellent introduction to all aspects of statistics for experimental design; experienced researchers will also benefit from the text as a handbook for reference. Winer, et al. 1991 constitutes a comprehensive reference as well. Gonzalez 2009 and Rubin 2009 are well-rounded guides to statistical issues and concepts associated with evaluation. More specifically, Kirby, et al. 2002 introduces readers to the importance of sample size calculation in experimental trials and the main components involved in calculating sample size for trials. Brookes, et al. 2004 and Rothwell 2005 deal with the advantages and cautions associated with subgroup analysis, which can be used to examine how different participant characteristics or intervention conditions might predict varying outcomes.

  • Brookes, Sara T., Elise Whitely, Matthias Egger, George D. Smith, Paul A. Mulheran, and Tim J. Peters. 2004. Subgroup analyses in randomized trials: Risks of subgroup-specific analyses; Power and sample size for the interaction test. Journal of Clinical Epidemiology 57.3: 229–236.

    DOI: 10.1016/j.jclinepi.2003.08.009

    This paper highlights the inappropriate applications of subgroup analyses with insufficient sample sizes.

  • Field, Andy. 2009. Discovering statistics using SPSS. 3d ed. London: SAGE.

    This is an easy-to-use guide to conducting statistical operations for all aspects of experimental designs. It is useful for all levels of students and researchers. It provides good supplementary web-based materials and exercises for PASW/SPSS (Predictive Analytics Software/Statistical Package for the Social Sciences).

  • Gonzalez, Richard. 2009. Data analysis for experimental design. New York: Guilford.

    Gonzalez provides a comprehensive text on statistical issues related to experimental design for the behavioral sciences. It is appropriate for upper-level undergraduates or graduate students who have had some introductory statistics. Exercises and questions are provided at the end of each chapter that would be of use to students and teachers.

  • Kirby, Adrienne, Val Gebski, and Anthony C. Keech. 2002. Determining the sample size in a clinical trial. Medical Journal of Australia 177.5: 256–257.

    This article gives a brief introduction to the importance of sample size calculations for experimental studies and outlines the main components involved in such sample size calculations.

  • Rothwell, Peter M. 2005. Subgroup analysis in randomised controlled trials: Importance, indications, and interpretation. Lancet 365.9454: 176–186.

    DOI: 10.1016/S0140-6736(05)17709-5

    Subgroup analysis with data from experimental trials can help investigators better understand relationships between different conditions or participant characteristics and outcomes. Rothwell presents the uses of subgroup analysis and discusses how to properly conduct subgroup analysis (and prepare a trial prospectively for subgroup analysis).

  • Rubin, Allen. 2009. Statistics for evidence-based practice and evaluation. Belmont, CA: Brooks Cole.

    Rubin’s text is an accessible guide for students on statistical issues related to experimental studies as well as processes that analyze data from multiple trials together. This edition provides additional guidance on reporting statistics, practice illustrations, and boxes to highlight statistical formulas.

  • Winer, Benjamin J., Donald R. Brown, and Kenneth M. Michels. 1991. Statistical principles in experimental design. 3d ed. New York: McGraw-Hill.

    Though dry, this text provides a reference for a wide range of statistical issues related to experimental design.

Cluster-Based Experiments

Cluster-based experiments are an increasingly popular approach for studying intervention effects. Randomizing intact groups can help navigate political and ethical concerns with experimental trials. Cluster designs can also minimize “contamination effects” occurring when subjects of intervention and control groups influence each other, which can bias the results. Cluster-based experiments, however, also have important implications for statistical power, analysis, and reporting, and the technique is not always appropriately employed. It is advisable that students, researchers, and practitioners first familiarize themselves with conventional experimental design for randomizing individuals (see Introductory Works, Textbooks, and Manuals and Guides) as a foundation for understanding the differences and issues involved with cluster-based experiments. Readers interested in a comprehensive reference on cluster-based experiments should see Murray 1998. Donner 1998 and Murray, et al. 2004 provide more summarized overviews of the features and common issues associated with cluster-based trials, such as the trade-off between precision and cost, as addressed in Flynn, et al. 2002. Cluster-based experiments are known to reduce statistical power in trials; Campbell, et al. 2004 and Donner 1992 provide tools and approaches for addressing this challenge. Torgerson 2001 represents a debate perspective, which posits that the advantages of cluster-based trials are exaggerated relative to their drawbacks and that their substitution for individual-based trials should be reconsidered.

  • Campbell, Marion K., Sean Thomson, Craig R. Ramsay, Graham S. MacLennan, and Jeremy M. Grimshaw. 2004. Sample size calculator for cluster randomized trials. Computers in Biology and Medicine 34.2: 113–125.

    DOI: 10.1016/S0010-4825(03)00039-8

    This sample size calculator can be a practical tool for investigators preparing to conduct a cluster-based experiment.

  • Donner, Allan. 1992. Sample size requirements for stratified cluster randomization designs. Statistics in Medicine 11.6: 743–750.

    DOI: 10.1002/sim.4780110605

    Given the smaller number of units typically randomized in cluster-based experiments, stratification can help minimize observable baseline differences between intervention and control groups. This technical paper suggests guidelines for sample size in cases of stratified cluster-based experiments.

  • Donner, Allan. 1998. Some aspects of the design and analysis of cluster randomization trials. Journal of the Royal Statistical Society, Series C, Applied Statistics 47.1: 95–113.

    This article addresses a number of commonly discussed issues for cluster-based experiments, including application, ethics, sample size, and analysis.

  • Flynn, Terry N., Elise Whitley, and Tim J. Peters. 2002. Recruitment strategies in a cluster randomized trial: Cost implications. Statistics in Medicine 21.3: 397–405.

    DOI: 10.1002/sim.1025

    This study makes a unique contribution to the literature on cluster-based experiments by examining trade-offs between statistical power and cost management. The paper could guide investigators in appropriate considerations of research financing and quality as they prepare to conduct cluster-based trials.

  • Murray, David M. 1998. Design and analysis of group-randomized trials. New York: Oxford Univ. Press.

    This is one of the few comprehensive texts on cluster-based experiments, attending to the many complex issues of these challenging designs. The example-based approach makes this book reasonably accessible in dealing with regression analyses and the many different design questions that these trials present.

  • Murray, David M., S. P. Varnell, and J. L. Blitstein. 2004. Design and analysis of group-randomized trials: A review of recent methodological developments. American Journal of Public Health 94.3: 423–432.

    DOI: 10.2105/AJPH.94.3.423

    This paper provides a useful update and review for readers interested in the early 21st-century design and analysis issues involved with cluster-based experiments.

  • Torgerson, David J. 2001. Contamination in trials: Is cluster randomisation the answer? British Medical Journal 322.7282: 355–357.

    DOI: 10.1136/bmj.322.7282.355

    Torgerson presents a skeptical view of the use of cluster-based trials over randomly assigning individuals, suggesting that the drawbacks of cluster-based trials are not worth the gains in attempting to deal with contamination effects, a commonly cited reason for using cluster-based designs.

Ethical Considerations

Any research dealing directly with human subjects has ethical implications, and experimental design is no exception. An accessible introduction to ethical issues is provided by the Ethox Centre, which maintains a regularly updated online resource with useful information concerning various aspects of research ethics. Moreno 1999 provides a general overview, rooted in both history and late-20th-century discourse, of some of the major ethical considerations underlying experimental design. Walker, et al. 2008 draws attention to informed consent and the inadequacy of early 21st-century informed consent practices in experimental trials. Montori, et al. 2005 questions the practice, and the reporting, of stopping experimental trials early because of apparent benefit. Edwards, et al. 1999 deals specifically with ethical considerations related to cluster-based experiments.

  • Edwards, Sarah J. L., David A. Braunholtz, Richard J. Lilford, and Andrew J. Stevens. 1999. Ethical issues in the design and conduct of cluster randomised controlled trials. British Medical Journal 318.7195: 1407–1409.

    This article raises ethical issues specific to cluster-based experiments, such as the kinds of cluster-based trials in which informed consent should be sought and those in which special safeguards need to be instituted when individual informed consent is not realistic.

  • Ethox Centre.

    The Ethox Centre constitutes an active and regularly updated resource for ethical considerations in health-related research globally. Researchers may consider this resource useful for experimental design in social welfare fields.

  • Montori, Victor M., P. J. Devereaux, Neill K. J. Adhikari, Karen E. A. Burns, Christoph H. Eggert, Matthias Briel, Christina Lacchetti, Teresa W. Leung, Elizabeth Darling, Dianne M. Bryant, Heiner C. Bucher, Holger J. Schünemann, Maureen O. Meade, Deborah J. Cook, Patricia J. Erwin, Amit Sood, Richa Sood, Benjamin Lo, Carly A. Thompson, Qi Zhou, Edward Mills, and Gordon H. Guyatt. 2005. Randomized trials stopped early for benefit: A systematic review. Journal of the American Medical Association 294.17: 2203–2209.

    DOI: 10.1001/jama.294.17.2203

    This study analyzes data from 143 experimental trials that were stopped early for belief of intervention benefit to participants. Findings suggest that many trials that are stopped early for ethical reasons report inadequate information about the decision to do so, show questionably large effect sizes, and should, according to the authors, be interpreted cautiously.

  • Moreno, Jonathan D. 1999. Ethics of research design. Accountability in Research: Policies and Quality Assurance 7.2–4: 175–182.

    With emphasis on experimental trials in psychiatric research, this paper summarizes a range of ethical issues in the design of clinical research.

  • Walker, Robert, Lesley Hoggart, and Gayle Hamilton. 2008. Random assignment and informed consent: A case study of multiple perspectives. American Journal of Evaluation 29.2: 156–174.

    DOI: 10.1177/1098214008317206

    Interviews with participants in a large experimental trial of a United Kingdom–based social intervention found that, in spite of careful attention to proper informed consent procedures, many participants still lacked understanding of the experiment and their involvement. Implications are discussed.

Reporting Experiments

In order for experimental trials to be appropriately appraised and applied, it is critical that methods and results be adequately reported. The first stop for those interested in properly reporting and appraising studies generally ought to be the Equator Network, as it collects many of the most updated resources on the topic, including the following consensus papers. Several papers and systematic reviews have addressed needs and recommendations for explicit reporting of experiments; these have largely culminated in the CONSORT (Consolidated Standards of Reporting Trials) statement (Schulz, et al. 2010) and the TREND (Transparent Reporting of Evaluations with Nonrandomised Designs) statement (Des Jarlais, et al. 2004), which are widely referenced for reporting standards of experimental and quasi-experimental trials. Moher, et al. 1994 deals specifically with reporting statistical power and sample size. Mayo-Wilson 2007 supplements the CONSORT statement with direction for reporting intervention implementation in papers on randomized controlled trials, and Campbell, et al. 2004 extends it to cluster-based trials.

  • Campbell, Marion K., Diana R. Elbourne, and Douglas G. Altman. 2004. CONSORT statement: Extension to cluster randomised trials. British Medical Journal 328.7441: 702–708.

    DOI: 10.1136/bmj.328.7441.702

    This extension to the CONSORT (Consolidated Standards of Reporting Trials) statement responds to the additional reporting needs for cluster-based experiments that were often inadequately addressed in study write-ups.

  • Des Jarlais, Don C., Cynthia Lyles, Nicole Crepaz, and the TREND Group. 2004. Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: The TREND statement. American Journal of Public Health 94.3: 361–366.

    DOI: 10.2105/AJPH.94.3.361

    Like the CONSORT (Consolidated Standards of Reporting Trials) statement, the TREND (Transparent Reporting of Evaluations with Nonrandomised Designs) statement constitutes a well-reputed guide for properly reporting trials that was developed by a range of professionals, but TREND focuses on reporting nonrandomized trials.

  • Equator Network.

    This website compiles the most updated versions of many key consensus papers on research reporting, including the CONSORT (Consolidated Standards of Reporting Trials) and TREND (Transparent Reporting of Evaluations with Nonrandomised Designs) statements. Researchers are advised to consult first with this resource on the subject of reporting. On the website, visit the Resource Centre page.

  • Mayo-Wilson, Evan. 2007. Reporting implementation in randomized trials: Proposed additions to the consolidated standards of reporting trials statement. American Journal of Public Health 97.4: 630–633.

    DOI: 10.2105/AJPH.2006.094169

    Mayo-Wilson makes an important contribution to the literature on reporting in experimental design by making recommendations for reporting intervention implementation in trial write-ups.

  • Moher, David, Corinne S. Dulberg, and George A. Wells. 1994. Statistical power, sample size, and their reporting in randomized controlled trials. Journal of the American Medical Association 272.2: 122–124.

    DOI: 10.1001/jama.272.2.122

    This article discusses the importance of reporting statistical power and sample size in particular and addresses the implications of studies that fail to discuss whether observed differences between groups are statistically important.

  • Schulz, Kenneth F., Douglas G. Altman, and David Moher. 2010. CONSORT 2010 statement: Updated guidelines for reporting parallel group randomised trials. BMC Medicine 8.18.

    This is the early 21st-century version of a frequently cited and consulted expert consensus statement providing details for explicit reporting standards of experimental trials. A complete version of this article is available online.

Debate on Experimental Design

The growing discourse and promotion of experimental design in evaluative research has also prompted substantial debate. Some of the challenges to experimental design have been rooted in misunderstandings or overgeneralizations, as demonstrated in European Evaluation Society 2007. Black 1996 discusses the strengths of different observational methods relative to experimental methods. In a much quoted debate, Hammersley 2005 and, in response, Chalmers 2005 discuss aspects of the merits of certain experimental methods and their relevance in the social work field. More philosophically, Worrall 2007 provides an interesting, sophisticated debate on the logic underpinning randomization that might be read in conjunction with Smith and Pell 2003, which satirizes overreliance on randomized trials.

Teaching

Lenth 2002 presents tips and scenarios for instructors in teaching different aspects of experimental design through interactive methods. Shlonsky and Stern 2007 and REACH-SW are useful resources for teaching evidence-based practice generally, which can provide an appropriate context for teaching experimental design. Gibbs, et al. 2007 and the Evidence Based Practice and Policy Online Resource Training Center refer readers to websites that provide introductions to evidence-based practice in social work–related fields, and the Evidence Based Practice and Policy Online Resource Training Center in particular provides a series of educational presentations and handouts available free online. Straus, et al. 2005 provides well-developed support specifically for teaching students in the area of medicine and health, though the focus here is on randomized controlled trials and systematic reviews.

  • Evidence Based Practice and Policy Online Resource Training Center.

    This website provides a number of training and teaching resources on evidence-based practice for social work generally, including presentations and handouts available free online as well as references to other websites providing evidence-based practice resources and to clearinghouses holding evaluations pertinent to social work practitioners and researchers.

  • Gibbs, Leonard, Eamon C. Armstrong, Donna Raleigh, and Josette Jones. 2007. Evidence-Based Practice for the Helping Professions.

    This website is a general resource for social workers on evidence-based practice. Readers of this bibliography might find the COPES Questions page helpful in developing an effective research question in planning an experimental study.

  • Lenth, Russell V. 2002. Using simulations in teaching experimental design. Iowa City: Univ. of Iowa.

    Available free online, this document prepared by an instructor offers a range of techniques and scenarios as tips to support instructors in teaching students about different aspects of experimental design through interactive methods.

  • REACH-SW. Danya International.

    REACH-SW is a commercial service that provides CD-ROM teaching packages to aid in teaching evidence-based practice to students at various levels. Danya International also offers training and technical assistance services for teachers, researchers, students, and practitioners.

  • Shlonsky, Aron, and Susan B. Stern. 2007. Reflections on the teaching of evidence-based practice. Research on Social Work Practice 17.5: 603–611.

    DOI: 10.1177/1049731507301527

    This article does not speak directly to teaching experimental design, but it does provide advice to instructors of evidence-based practice in general. In many cases, it will be appropriate to teach experimental design within an evidence-based framework like the one illustrated by Shlonsky and Stern.

  • Straus, Sharon E., W. Scott Richardson, Paul Glasziou, and R. Brian Haynes. 2005. Evidence-based medicine: How to practice and teach EBM. 3d ed. Edinburgh: Churchill Livingstone.

    This book is an edition of a major text in the development of the evidence-based health care field. The text includes a computer CD with editable PowerPoint presentations, a complementary website with teaching tips and critical appraisal tools, and text specifically addressing issues of experimental design on pages 117–143.

LAST MODIFIED: 06/29/2011

DOI: 10.1093/OBO/9780195389678-0053
