Public Health Randomized Controlled Trials
by
Christine Paul, Robert Sanson-Fisher, Mariko Carey
  • LAST REVIEWED: 15 June 2015
  • LAST MODIFIED: 31 August 2015
  • DOI: 10.1093/obo/9780199756797-0047

Introduction

Randomized controlled trials (RCTs) have become the accepted approach, or “gold standard,” for producing robust evidence to guide decisions about the effectiveness of proposed public health interventions, programs, or practices. While other research designs are in common use, the “levels of evidence” concept clearly places RCTs as the peak or most preferred approach to the production of evidence. The historical roots of RCTs lie in recognition of the important information to be gained from having comparison or control groups. However, attempts to translate the clinical framework underlying the classical RCT design into the complex public health context have illuminated the limitations of RCTs as a tool for public health evaluation. This conundrum has also been associated with a rise in the use and sophistication of variants on the RCT design, such as cluster randomization. These designs bring with them additional challenges that must be considered during trial development. A thorough understanding of the ethical issues and the key criteria for methodological rigor is also essential to a best-practice approach to understanding, designing, and reporting RCTs in the public health context.

Texts and Reference Works

Works such as Bulpitt 1996 and Matthews 2006 describe the principles of randomized controlled trials (RCTs) in the clinical setting, much of which is directly relevant to ensuring that RCTs in the public health arena are designed to provide the greatest possible statistical power. Torgerson and Torgerson 2008 addresses the complexities and additional challenges to rigor that often occur in conducting RCTs in the public health context. Hayes and Moulton 2009 provides a thorough exploration of issues related to cluster randomized trials, which are increasingly being used in health contexts where randomization of individuals is not appropriate.

  • Bulpitt, C. J. 1996. Randomised controlled clinical trials. 2d ed. Dordrecht, The Netherlands: Kluwer.

    DOI: 10.1007/978-1-4615-6347-1

    This older text provides a comprehensive background to the history and ethics of conducting RCTs, with a focus on clinical trials. Various types of RCT design options are described along with issues relating to sampling, recruitment, avoiding bias, cost-effectiveness, and the advantages and disadvantages of RCTs.


  • Hayes, R. J., and L. H. Moulton. 2009. Cluster randomised trials. Interdisciplinary Statistics. Boca Raton, FL: CRC Press.

    DOI: 10.1201/9781584888178

    Important concepts (including conditions when cluster designs are appropriate), design issues, and analysis issues for cluster randomized trials are addressed in this text. A range of key issues are explored, including variability between clusters, choice of clusters, matching and stratification, randomization procedures, sample size, and alternative designs. This text provides many examples and detailed instructions regarding statistical analysis.

  • Matthews, J. N. S. 2006. Introduction to randomized controlled clinical trials. 2d ed. Texts in Statistical Science. Boca Raton, FL: CRC Press.

    DOI: 10.1201/9781420011302

    Matthews provides a clear introduction to the RCT from a clinical perspective, describing key features and discussing issues of bias and methods of randomization, along with analysis and sample-size calculation. A strong focus is on issues key to clinical drug trials, such as blinding, placebos, and protocol violations.

  • Torgerson, D. J., and C. Torgerson. 2008. Designing randomised trials in health, education, and the social sciences: An introduction. New York: Palgrave Macmillan.

    DOI: 10.1057/9780230583993

    A range of research designs is covered, including RCTs, from a social sciences rather than clinical perspective. Chapters address topics such as placebo designs, sources of bias, cluster randomized trials, preference approaches, unequal allocation, sample-size issues, and the importance of assessing economic costs. Analytical approaches are addressed but are not a major focus of the text.

Levels of Evidence

A number of countries such as Canada (Canadian Task Force on the Periodic Health Examination 1979), the United States (Harris, et al. 2001), and Australia (National Health and Medical Research Council 2009) have produced frameworks describing the placement of data from randomized controlled trials (RCTs) within a hierarchy of evidence. More-recent approaches to the grading of evidence continue to give high priority to RCT designs (Oxford Centre for Evidence-Based Medicine 2011; Guyatt, et al. 2008). The Cochrane Collaboration is often considered to provide a gold standard in evidence synthesis, and its approach is described in Higgins and Green 2011.

  • Canadian Task Force on the Periodic Health Examination. 1979. The Periodic Health Examination. Canadian Medical Association Journal 121.9: 1193–1254.

    The relative importance or weight of RCT designs is illustrated in the Canadian Task Force report on the Periodic Health Examination. The report rates the strength of the evidence for each health condition examined, by using a hierarchy of “levels of evidence.”

  • Guyatt, G. H., A. D. Oxman, R. Kunz, G. E. Vist, Y. Falck-Ytter, and H. J. Schünemann. 2008. GRADE: What is “quality of evidence” and why is it important to clinicians? British Medical Journal 336.7651: 995–998.

    DOI: 10.1136/bmj.39490.551019.BE

    This article details the classification system developed by the Grading of Recommendations Assessment, Development and Evaluation (GRADE) Working Group. This system has parallels to the “levels of evidence” concept, with an additional focus on data relating to patient decision making. Available online through registration or by subscription.

  • Harris, R. P., M. Helfand, S. H. Woolf, et al. 2001. Current methods of the U.S. Preventive Services Task Force: A review of the process. American Journal of Preventive Medicine 20.3.S1: 21–35.

    DOI: 10.1016/S0749-3797(01)00261-6

    A levels-of-evidence approach is used by the US Preventive Services Task Force (USPSTF) to advise on the usefulness of a number of preventive health services. The description of the methods used to develop the USPSTF recommendations illustrates the role of RCTs as a tool for producing reliable evidence. Three levels or “strata” of analysis are considered. Available online for a fee.

  • Higgins, J. P. T., and S. Green, eds. 2011. Cochrane handbook for systematic reviews of interventions. Version 5.1.0. London: Cochrane Collaboration.

    The Cochrane Handbook is designed to be used by reviewers to conduct a systematic review of intervention literature. It outlines the rationale, steps, and criteria for reviewing, including assessment of the quality of study design. The handbook can also be used as a general guide for literature reviewing and for robust study design.

  • National Health and Medical Research Council. 2009. NHMRC additional levels of evidence and grades for recommendations for developers of guidelines. Canberra, Australia: National Health and Medical Research Council.

    The hierarchy of study designs endorsed by the National Health and Medical Research Council (NHMRC) in Australia follows a pattern similar to that used by other countries. This document also provides more general guidance on clinical impact, generalizability to the target population, and applicability to the Australian context. An expanded set of evidence levels has been developed for guidelines on diagnosis, prognosis, etiology, and screening.

  • Oxford Centre for Evidence-Based Medicine. 2011. The 2011 Oxford CEBM levels of evidence. Oxford: Centre for Evidence-Based Medicine.

    This document tabulates the definitions used by the Centre for Evidence-Based Medicine, including those for RCTs. Levels of evidence for various issues such as prognosis and therapy are described.

Historical Perspectives

The randomized controlled trial (RCT) design, although long recognized as the “gold standard” for producing robust evidence, took some time to be developed and accepted. Bulpitt 1996 traces the early development of the RCT to its acceptance as the gold standard for producing sound evidence of the effectiveness of an intervention. Torgerson and Torgerson 2008 provides an additional description of the history of the RCT, extending its use into the public health arena and the associated challenges of using RCTs in such a context. Boruch, et al. 1978 provides an alternative view, detailing how RCTs can be used in a wide variety of settings.

  • Boruch, R. F., A. J. McSweeney, and E. J. Soderstrom. 1978. Randomised field experiments for program planning, development, and evaluation: An illustrative bibliography. Evaluation Review 2.4: 655–695.

    DOI: 10.1177/0193841X7800200411

    Boruch and colleagues counter the view that RCTs are not broadly appropriate or feasible, by documenting in bibliographic text over three hundred field experiments conducted across a broad range of settings from the 1940s to the 1970s. A subsequent paper by Boruch and colleagues, “Randomized Experiments for Evaluating and Planning Local Programs: A Summary on Appropriateness and Feasibility” (Public Administration Review 39.1 [1979]: 36–40), summarizes the concepts.

  • Bulpitt, C. J. 1996. Randomised controlled clinical trials. 2d ed. Dordrecht, The Netherlands: Kluwer.

    DOI: 10.1007/978-1-4615-6347-1

    This text describes the very early formal RCTs of the 1940s, notably the trials of streptomycin. Bulpitt also traces examples of the earlier roots of the use of comparison or control groups (sometimes unintentionally), the use of placebos, the use of probability theory, and the emergence of standardized approaches to observation and recording of outcomes.

  • Torgerson, D. J., and C. Torgerson. 2008. Designing randomised trials in health, education, and the social sciences: An introduction. New York: Palgrave Macmillan.

    DOI: 10.1057/9780230583993

    The book describes a range of research designs, including RCTs, from a social sciences rather than a clinical perspective. The discussion of the history of controlled trials includes the use of RCTs in trials of education methods, prior to and parallel with the emergence of RCTs in clinical medicine.

Ethics of Use

Ethical principles for the conduct of research, such as those developed in the Declaration of Helsinki, formed the basis of more-detailed documents such as the Belmont Report (National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research 1979), which describes how randomized controlled trials (RCTs) should be conducted to protect participants. Taljaard, et al. 2009 describes additional ethical issues that arise when cluster RCTs are used. Lilford and Jackson 1995; Ellis, et al. 2002; Kim, et al. 2004; and Nardini 2014 provide data and commentary on important debates relating to ethical principles such as equipoise, informed consent, and conflicts of interest. Nardini 2014 discusses the ethics of RCTs in oncology in particular. Henschel, et al. 2010 highlights the additional requirements for ethical treatment of children when planning RCTs.

  • Ellis, P. M., P. Butow, and M. H. N. Tattersall. 2002. Informing breast cancer patients about clinical trials: A randomized clinical trial of an educational booklet. Annals of Oncology 13.9: 1414–1423.

    DOI: 10.1093/annonc/mdf255

    Ellis and colleagues randomized women undergoing surgery to receive, or not, an information booklet explaining randomized trials. Women who received the educational booklet were significantly less likely to consider trial participation. The study explores some of the ethical implications for communication of information about RCTs. Available online for subscribers.

  • Henschel, A. D., L. G. Rothenberger, and J. Boos. 2010. Randomized clinical trials in children—ethical and methodological issues. Current Pharmaceutical Design 16.22: 2407–2415.

    DOI: 10.2174/138161210791959854

    This review highlights the additional requirements for the ethical treatment of children when planning RCTs. Children should be exposed only to effective and safe drug treatments. This review explores feasibility, informed consent, and the importance of establishing equipoise when designing pediatric RCTs. Available online by subscription.

  • Kim, S. Y. H., R. W. Millard, P. Nisbet, C. Cox, and E. D. Caine. 2004. Potential research participants’ views regarding researcher and institutional financial conflicts of interest. Journal of Medical Ethics 30.1: 73–79.

    DOI: 10.1136/jme.2002.001461

    This journal article describes the views of potential research participants about the importance of being informed about researchers’ conflicts of interest. The situations under which conflicts of interest arise are noted (e.g., a financial or other interest in the intervention being tested). Available online.

  • Lilford, R. J., and J. Jackson. 1995. Equipoise and the ethics of randomization. Journal of the Royal Society of Medicine 88.10: 552–559.

    Lilford and Jackson describe the importance of ethical concepts such as equipoise, a situation that occurs when there is no known preference between the treatment options. The paper explores whether randomization creates a tension between the obligation to provide participants with the best option and the obligation of society to facilitate research aimed at improving medical treatment. Available online.

  • Nardini, C. 2014. The ethics of clinical trials. Ecancermedicalscience 8: 387.

    An up-to-date review discussing the ethical issues surrounding RCTs, in the context of oncology in particular. The review discusses participation and informed consent, deception surrounding the use of placebos, and the problem of striking a balance between exploitation and overprotection of a patient. Available online.

  • National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. 1979. The Belmont report: Ethical principles and guidelines for the protection of human subjects of research. DHEW Pub. No. (OS) 78-0012. Washington, DC: US Government Printing Office.

    In 1974 the US government created the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Belmont Report summarizes the ethical principles identified by the Commission. These include respect for persons, beneficence, justice, informed consent, assessing risks versus benefits, and appropriate participant selection. Available online.

  • Taljaard, M., C. Weijer, J. M. Grimshaw, et al. 2009. Ethical and policy issues in cluster randomized trials: Rationale and design of a mixed methods research study. Trials 10.1: 61.

    DOI: 10.1186/1745-6215-10-61

    Taljaard and colleagues outline ethical issues raised by randomizing groups, including whether it is possible to gain whole-group informed consent in a range of settings such as communities, hospitals, and developing countries. A planned process for using mixed-methods research to produce appropriate guidelines is described.

  • Declaration of Helsinki. World Medical Association.

    The Declaration of Helsinki is the World Medical Association’s (WMA’s) policy statement, adopted in 1964 and most recently amended at the WMA General Assembly in October 2013. Basic principles and processes are outlined. The WMA website also contains links to a number of relevant papers on the topic.

Randomization and Control Groups

The basis of the randomized controlled trial (RCT) design is the use of randomization and control groups—that is, randomly allocating individuals or groups to either receive or not receive an intervention. The basic mechanics of this process are discussed in Matthews 2006, while Torgerson and Torgerson 2008 describes how this might be applied in a social science context. Some of the implications of not using an RCT design are addressed in McKee, et al. 1999, while Jadad and Enkin 2007 addresses the idea that not all RCTs are equal. Types of Designs describes varying types of RCTs.
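The mechanics of random allocation described in these texts can be sketched in a few lines. This is an illustrative sketch only, not a procedure from any of the works cited: the participant IDs, arm labels, and seed are hypothetical, and a real trial would use a concealed, pre-generated allocation schedule.

```python
import random

def randomize(participant_ids, seed=2024):
    """Simple (complete) randomization: each participant is allocated
    to 'intervention' or 'control' independently, like a coin toss.
    The seed is fixed here only so the sketch is reproducible."""
    rng = random.Random(seed)
    return {pid: rng.choice(["intervention", "control"])
            for pid in participant_ids}

allocation = randomize(["P001", "P002", "P003", "P004", "P005", "P006"])
```

Note that simple randomization guarantees unbiased allocation but not equal group sizes in small samples, which is one motivation for the block, stratified, and minimization designs covered under Types of Designs.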

  • Jadad, A. R., and M. W. Enkin. 2007. Assessing the quality of randomized controlled trials: Why, what, how, and by whom? In Randomized controlled trials: Questions, answers, and musings. 2d ed. By A. R. Jadad and M. W. Enkin, 48–61. Malden, MA: Blackwell.

    This is a brief general introduction to the idea that methodological quality influences the findings and interpretation of RCTs. The role of randomization and use of control groups as a means to achieve internal validity are introduced.

  • Matthews, J. N. S. 2006. Introduction to randomized controlled clinical trials. 2d ed. Texts in Statistical Science. Boca Raton, FL: CRC Press.

    DOI: 10.1201/9781420011302

    Matthews provides a clear introduction to the RCT from a clinical perspective, describing the basics of randomization and the importance of control groups. The text has a strong clinical focus.

  • McKee, M., A. Britton, N. Black, K. McPherson, C. Sanderson, and C. Bain. 1999. Interpreting the evidence: Choosing between randomised and non-randomised studies. British Medical Journal 319.7205: 312–315.

    DOI: 10.1136/bmj.319.7205.312

    Presents a summary of a more detailed systematic review exploring the internal and external threats to validity that can occur during the use of randomization. The article explains the threats, their implications, and some potential solutions for each threat. Available online through registration or by subscription.

  • Torgerson, D. J., and C. Torgerson. 2008. Designing randomised trials in health, education, and the social sciences: An introduction. New York: Palgrave Macmillan.

    DOI: 10.1057/9780230583993

    The book describes a range of research designs, including RCTs, from a social sciences rather than clinical perspective. The value of randomization and use of control groups is described.

Types of Designs

The constraints of simple randomization have given rise to a range of additional design approaches, such as those outlined in Matthews 2006. Various authors describe in more detail issues associated with the various types of designs, including stratified randomization in Kernan, et al. 1999; factorial designs and cluster randomized designs in Puffer, et al. 2005 and Hayes and Moulton 2009; crossover trials in Jones and Kenward 2003; and the Zelen design in Adamson, et al. 2006. Scott, et al. 2002 describes the minimization design, which may not necessarily be considered a true randomized controlled trial (RCT) design. The n-of-1 RCT is used in individual patients where a large-scale RCT may not be possible; the method, together with a guide for clinicians, is provided in Guyatt, et al. 1988.
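As a rough illustration of one of these approaches, permuted-block randomization (one of the designs Matthews 2006 covers) can be sketched as follows. The block size and seed are hypothetical choices made for the sketch, not values from any cited source.

```python
import random

def block_randomize(n_participants, block_size=4, seed=0):
    """Permuted-block randomization: within every block of `block_size`
    consecutive participants, exactly half are assigned to each arm,
    so the two groups never differ in size by more than block_size // 2."""
    assert block_size % 2 == 0, "block size must be even for 1:1 allocation"
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_participants:
        block = (["intervention"] * (block_size // 2)
                 + ["control"] * (block_size // 2))
        rng.shuffle(block)  # random order within the block
        sequence.extend(block)
    return sequence[:n_participants]

arms = block_randomize(10)
```

Stratified randomization, as reviewed in Kernan, et al. 1999, amounts to running a separate sequence of this kind within each stratum (e.g., per site or age group), so balance is maintained on the stratifying factors.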

  • Adamson, J., S. Cockayne, S. Puffer, and D. J. Torgerson. 2006. Review of randomised trials using the post-randomised consent (Zelen’s) design. Contemporary Clinical Trials 27.4: 305–319.

    DOI: 10.1016/j.cct.2005.11.003

    The post-randomization consent design was proposed by Marvin Zelen in 1979 to maximize recruitment by seeking consent only from those in the intervention arm. While it has remained controversial, Adamson and colleagues identified fifty-eight trials using the method. The review explores reasons for the use of the method and the degree of experimental arm crossover that occurred. Available online for a fee.

  • Guyatt, G., D. Sackett, J. Adachi, et al. 1988. A clinician’s guide for conducting randomized trials in individual patients. Canadian Medical Association Journal 139.6: 497–503.

    The n-of-1 RCT design proposed by Guyatt expands on single case studies and single-subject research. The n-of-1 RCT is used for individual patients with clinical disorders where a large RCT may not be possible. The design uses a double-blind crossover randomized method: the patient receives a combination of treatment and placebo / no treatment in a random sequence over time. Described earlier by the same author and his colleagues in “Determining Optimal Therapy—Randomized Trials in Individual Patients” (New England Journal of Medicine 314.14 [1986]: 889–892). This paper provides a comprehensive guide for conducting an n-of-1 RCT. Available online.

  • Hayes, R. J., and L. H. Moulton. 2009. Cluster randomised trials. Interdisciplinary Statistics. Boca Raton, FL: CRC Press.

    DOI: 10.1201/9781584888178

    Basic concepts (including conditions when cluster designs are appropriate), design issues, and analysis issues for cluster randomized trials are addressed in this text. A range of key issues are explored, including variability between clusters, choice of clusters, matching and stratification, randomization procedures, sample size, and alternative designs.

  • Jones, B., and M. G. Kenward. 2003. Design and analysis of crossover trials. 2d ed. Monographs on Statistics and Applied Probability 98. Boca Raton, FL: CRC Press.

    Crossover trials, where participants receive a sequence of treatments or both the intervention and control treatment, are discussed in detail in this text. Design, analysis, and interpretation are discussed.

  • Kernan, W. N., C. M. Viscoli, R. W. Makuch, L. M. Brass, and R. I. Horwitz. 1999. Stratified randomization for clinical trials. Journal of Clinical Epidemiology 52.1: 19–26.

    DOI: 10.1016/S0895-4356(98)00138-3

    Recognition that particular known factors will influence study outcomes, particularly in small trials, resulted in the use of a stratified randomization approach. Kernan and colleagues attempt to address debate about the usefulness of stratified randomization, by describing its purpose and implications and reviewing original research on stratification. Available online with subscription.

  • Matthews, J. N. S. 2006. Introduction to randomized controlled clinical trials. 2d ed. Texts in Statistical Science. Boca Raton, FL: CRC Press.

    DOI: 10.1201/9781420011302

    Matthews provides a clear introduction to the RCT from a clinical perspective, describing the basics of randomization, including simple or complete randomization, block randomization, stratification, and minimization designs. A strong focus is on issues key to clinical drug trials, such as blinding, placebos, and protocol violations.

  • Puffer, S., D. J. Torgerson, and J. Watson. 2005. Cluster randomized controlled trials. Journal of Evaluation in Clinical Practice 11.5: 479–483.

    DOI: 10.1111/j.1365-2753.2005.00568.x

    Cluster RCTs, in which groups or clusters are randomized, are increasingly common in public health interventions. This paper discusses the design features of a cluster trial that make it particularly vulnerable to bias. Suggestions for avoiding bias are also discussed. Available online with registration.

  • Scott, N. W., G. C. McPherson, C. R. Ramsay, and M. K. Campbell. 2002. The method of minimization for allocation to clinical trials: A review. Controlled Clinical Trials 23.6: 662–674.

    DOI: 10.1016/S0197-2456(02)00242-8

    The minimization method attempts to ensure that experimental groups are as balanced as possible in terms of predefined patient factors. Some may consider that minimization compromises the degree to which random allocation can operate to avoid unforeseen confounding effects. Scott and colleagues review the literature addressing the advantages and disadvantages of the method. Available online for a fee.

Criteria for Methodological Rigor

The challenges inherent in adhering to the principles of randomized controlled trial (RCT) design have resulted in the production of a range of tools and checklists against which the quality of RCTs can be thoroughly assessed. As described in Schulz, et al. 1995 and Jadad and Enkin 2007, adherence to these criteria can significantly influence trial outcomes. The meta-analysis in Schulz, et al. 1995 illustrates the potential of poor methodological quality to bias trial results. The Data Collection Checklist of the Cochrane Effective Practice and Organisation of Care Review Group (EPOC) and Higgins and Green 2011 provide a checklist for the review of trials of health interventions. A complementary tool to assess risk of bias is available from the same group. The Quality Assessment Tool for Quantitative Studies, from the Effective Public Health Practice Project (EPHPP), has been validated and is widely used for reviewing the methodological quality of public health trials. Puffer, et al. 2005 notes that such checklists often do not appropriately deal with methodological issues for cluster trials, and the authors discuss the design features of a cluster trial that make it particularly vulnerable to bias. Rychetnik 2002 discusses whether evaluative research on public health interventions can be adequately appraised by means of established criteria for judging methodological quality. A review of the various approaches to weighing the relative importance of evidence provided by different research designs is provided in West, et al. 2002. Armijo-Olivo, et al. 2012 finds that the Cochrane Collaboration Risk of Bias Tool performed differently from the EPHPP Quality Assessment Tool in a systematic review of psychometric research.

• Armijo-Olivo, S., C. R. Stiles, N. A. Hagen, P. D. Biondo, and G. G. Cummings. 2012. Assessment of study quality for systematic reviews: A comparison of the Cochrane Collaboration Risk of Bias Tool and the Effective Public Health Practice Project Quality Assessment Tool; Methodological research. Journal of Evaluation in Clinical Practice 18.1: 12–18.

  DOI: 10.1111/j.1365-2753.2010.01516.x

  A systematic review of the literature was conducted using each tool individually to assess the methodological quality of RCTs in which psychological outcomes were measured. The review found that the two tools performed differently and suggests a revision of the Cochrane Collaboration Risk of Bias Tool to thoroughly validate its psychometric properties.

• Balshem, H., M. Helfand, H. J. Schünemann, et al. 2011. GRADE guidelines 3: Rating the quality of evidence. Journal of Clinical Epidemiology 64.4: 401–406.

  DOI: 10.1016/j.jclinepi.2010.07.015

  The GRADE guidelines were created to assess the quality of reported trials in the literature, in order to apply the evidence in a clinical setting. There are twenty articles available in the series. This article, number 3, describes the four-category GRADE system for categorizing trials on the basis of quality of evidence. An RCT is usually the “gold standard” of clinical research but can be rated downward if the quality of evidence is low. The article also introduces guidelines for rating a trial upward or downward on the basis of quality of evidence. Available online by subscription or purchase.

• Data Collection Checklist. Cochrane Effective Practice and Organisation of Care Review Group.

  The EPOC data collection checklist is designed for use by those conducting a review of literature regarding specific health-care interventions. It provides a detailed list of quality criteria that can be applied to the review (and therefore the design) of interventions using recognized research designs, including RCTs.

• Higgins, J. P. T., D. G. Altman, P. C. Gøtzsche, et al. 2011. The Cochrane Collaboration’s tool for assessing risk of bias in randomised trials. British Medical Journal 343.7829: 889–893.

  The Cochrane Bias Methods Group and the Cochrane Statistical Methods Group created this tool to supplement the Cochrane Handbook (Higgins and Green 2011). It aims to make the process of assessing risk of bias clearer and more accurate. The tool is presented in an assessment table available online.

• Higgins, J. P. T., and S. Green, eds. 2011. Cochrane handbook for systematic reviews of interventions. Version 5.1.0. London: Cochrane Collaboration.

  A very detailed exposition of all elements of the process for conducting a thorough systematic review of the literature. Although much of the handbook is focused on the process of literature searching, there are useful sections on rating the methodological quality of the evidence.

• Jadad, A. R., and M. W. Enkin. 2007. Assessing the quality of randomized controlled trials: Why, what, how, and by whom? In Randomized controlled trials: Questions, answers, and musings. 2d ed. By A. R. Jadad and M. W. Enkin, 48–61. Malden, MA: Blackwell.

  Provides a brief general introduction to the idea that methodological quality influences the findings and interpretation of RCTs. The concepts of internal and external validity and generalizability are introduced. The construction of scales for assessing methodological criteria is discussed.

• Puffer, S., J. Torgerson, and J. Watson. 2005. Cluster randomized controlled trials. Journal of Evaluation in Clinical Practice 11.5: 479–483.

  DOI: 10.1111/j.1365-2753.2005.00568.x

  Puffer and colleagues suggest that general tools for evaluating methodological quality do not always adequately address issues related to cluster randomized trials. The paper discusses the design features of a cluster trial that make it particularly vulnerable to bias. Suggestions for avoiding bias are also discussed. Available online with registration.

• Quality Assessment Tool for Quantitative Studies. Effective Public Health Practice Project.

  The EPHPP is a Canada-based team aiming to inform public health services. The Quality Assessment Tool for Quantitative Studies describes criteria for assessing the methodological quality of designs, providing an overall or global rating of the quality of a study as well as individual ratings of key methodological criteria.

• Rychetnik, L., M. Frommer, P. Hawe, and A. Shiell. 2002. Criteria for evaluating evidence on public health interventions. Journal of Epidemiology and Community Health 56.2: 119–127.

  DOI: 10.1136/jech.56.2.119

  This paper explores the extent to which evaluative research on public health interventions can be adequately appraised by applying established criteria for judging methodological quality. Additional aspects of evidence on public health interventions not covered by the established criteria are proposed.

• Schulz, K. F., I. Chalmers, R. J. Hayes, and D. G. Altman. 1995. Empirical evidence of bias: Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. Journal of the American Medical Association 273.5: 408–412.

  DOI: 10.1001/jama.273.5.408

  The study assesses the methodological quality of 250 controlled trials from 33 meta-analyses and then analyzes the associations between those assessments and estimated treatment effects. The study shows that larger effects were found where treatment allocation had been inadequately concealed from the subjects in the trials. Trials that were not double-blind also yielded larger estimates of effects. Available online by subscription.

• West, S., V. King, T. S. Carey, et al. 2002. Systems to rate the strength of scientific evidence. Evidence Reports / Technology Assessments 47. Rockville, MD: Agency for Healthcare Research and Quality.

  This report provides a good overview of the many types of scales and systems that can be used to assess the quality of individual studies and bodies of evidence. Most important, it provides the reader with a set of criteria that may be used for determining how to assess or interpret evidence. These criteria were used as the basis for the authors’ assessment of existing scales and systems for evidence appraisal. Available online.

Reporting

Following widespread recognition that the reporting of randomized controlled trials (RCTs) was often lacking in essential details, various initiatives led to the publication of the CONSORT (Consolidated Standards of Reporting Trials) statement in the 1990s. This statement provides a checklist for authors to use when reporting on RCTs, covering all aspects of the design, conduct, analysis, and interpretation of the study. Moher, et al. 2010 presents the CONSORT 2010 Statement, a revised explanatory and elaboration document. The associated website offers helpful resources regarding what to include in publications reporting a randomized controlled trial. Numerous reviews have attempted to evaluate the effectiveness of the CONSORT guidelines.

Limitations of Design

The challenges in using randomized controlled trials (RCTs) to evaluate complex health promotion interventions are addressed both in Rychetnik, et al. 2002 and Sanson-Fisher, et al. 2007. A number of alternative research designs are proposed by the authors of these articles to address such challenges. Additional considerations that affect research design, including a broad assessment of impact and effectiveness, are described in the RE-AIM (reach, effectiveness, adoption, implementation, and maintenance) framework (Glasgow, et al. 1999; Green and Glasgow 2006). Glasziou, et al. 2004 addresses the implications of relying on RCT-level evidence to assess intervention effect sizes, and Craig, et al. 2008 describes how to develop and evaluate complex interventions. Bassler, et al. 2010 questions the practice of truncating an RCT early if there is perceived evidence of benefit. Guyatt, et al. 2011 discusses the limitations of RCTs and how the risk of bias contributes to a trial’s quality.

• Bassler, D., M. Briel, V. M. Montori, et al. 2010. Stopping randomized trials early for benefit and estimation of treatment effects: Systematic review and meta-regression analysis. Journal of the American Medical Association 303.12: 1180–1187.

  DOI: 10.1001/jama.2010.310

  This systematic review and meta-regression analysis concludes that RCTs truncated early for perceived evidence of benefit are misleading because of overestimation of treatment effects. Bias can occur early in the progress of an RCT from random fluctuations in results. Available online.

• Craig, P., P. Dieppe, S. Macintyre, S. Michie, I. Nazareth, and M. Petticrew. 2008. Developing and evaluating complex interventions: The new Medical Research Council guidance. British Medical Journal 337.7676: 979–983.

  This paper extends the Medical Research Council Guidelines for the Evaluation of Complex Interventions. The authors describe the steps involved in developing and evaluating complex interventions. Randomized controlled trials that include careful pilot testing, development work, and the collection of process measures are recommended.

• Glasgow, R. E., T. M. Vogt, and S. M. Boles. 1999. Evaluating the public health impact of health promotion interventions: The RE-AIM framework. American Journal of Public Health 89.9: 1322–1327.

  DOI: 10.2105/AJPH.89.9.1322

  A framework for evaluating health promotion interventions is described in the context of considering factors likely to be important to clinicians, policy makers, and consumers. These factors include reach, efficacy, extent of adoption, implementation, and sustainability over time. Available online.

• Glasziou, P., J. Vandenbroucke, and I. Chalmers. 2004. Assessing the quality of research. British Medical Journal 328.7430: 39–41.

  DOI: 10.1136/bmj.328.7430.39

  The implications of a reliance on levels of evidence to assess intervention impact are considered, including methodological quality and effect sizes. It is argued that a range of research designs is needed to answer questions about outcomes important to consumers. Available online with registration.

• Green, L. W., and R. E. Glasgow. 2006. Evaluating the relevance, generalization, and applicability of research: Issues in external validation and translation methodology. Evaluation & the Health Professions 29.1: 126–153.

  DOI: 10.1177/0163278705284445

  Criteria are presented for assessing the generalizability of health promotion interventions to the real-world context. Historical and theoretical perspectives are addressed. Available online for a fee.

• Guyatt, G. H., A. D. Oxman, G. Vist, et al. 2011. GRADE guidelines, 4: Rating the quality of evidence—study limitations (risk of bias). Journal of Clinical Epidemiology 64.4: 407–415.

  DOI: 10.1016/j.jclinepi.2010.07.017

  The GRADE system provides a framework for rating quality of evidence, similar to the Cochrane risk-of-bias tool discussed earlier. This particular article discusses the limitations of RCTs and provides guidelines for rating studies depending on their risk of bias. A low risk of bias indicates a high-quality study, and an RCT (usually the gold standard for a clinical trial) can be deemed of poor quality if the risk of bias is high. Available online through subscription or by purchase.

• Roseman, M., K. Milette, L. A. Bero, et al. 2011. Reporting of conflicts of interest in meta-analyses of trials of pharmacological treatments. Journal of the American Medical Association 305.10: 1008–1017.

  DOI: 10.1001/jama.2011.257

  The authors found that conflicts of interest disclosed in the original individual RCTs were rarely reported in the meta-analyses that included those trials.

• Rothwell, P. M. 2005. External validity of randomised controlled trials: “To whom do the results of this trial apply?” The Lancet 365.9453: 82–93.

  DOI: 10.1016/S0140-6736(04)17670-8

  A review that discusses the need for greater consideration of external validity in the design and reporting of RCTs. External validity refers to whether the results of a study can be generalized, or applied to a definable group of patients in a particular clinical setting. The author finds that guidelines and procedures for assessing the external validity of an RCT are lacking. Available online.

• Rychetnik, L., M. Frommer, P. Hawe, and A. Shiell. 2002. Criteria for evaluating evidence on public health interventions. Journal of Epidemiology and Community Health 56.2: 119–127.

  DOI: 10.1136/jech.56.2.119

  The inherent difficulties of using the RCT design for evaluation of public health interventions are presented. The authors emphasize the importance of real-world applicability in the intervention and, therefore, in the evaluation approach.

• Sanson-Fisher, R. W., B. Bonevski, L. W. Green, and C. D’Este. 2007. Limitations of the randomized controlled trial in evaluating population-based health interventions. American Journal of Preventive Medicine 33.2: 155–161.

  DOI: 10.1016/j.amepre.2007.04.007

  The authors describe the limitations of randomized controlled trials for examining the effectiveness of population health interventions. Emphasis is given to strategies for evaluating interventions involving whole communities, hospitals, or geographical areas. Available online with registration.

• Williams, B. A. 2010. Perils of evidence-based medicine. Perspectives in Biology and Medicine 53.1: 106–120.

  DOI: 10.1353/pbm.0.0132

  An article describing some of the limitations of evidence-based medicine and RCTs. In particular, the author discusses the weakness of RCTs for informing decisions about individual patients. Available online by subscription or purchase.
