Education Single-Subject Research Design
by
Timothy J. Lewis, Nicholas Gage
  • LAST REVIEWED: 19 November 2019
  • LAST MODIFIED: 25 June 2013
  • DOI: 10.1093/obo/9780199756810-0103

Introduction

Single-subject research, at times referred to as single-case research, is a quantitative approach to examining functional relationships between baseline and experimental conditions over time within individual subjects. The central features of single-subject research include collecting repeated measures of behavior through direct observation across several sessions, comparing rates or amounts of behavior in baseline (or typical) conditions with those in an intervention condition, and repeating baseline and intervention phases to note a functional relationship between the introduction and withdrawal of the intervention, or independent variable (IV), and the subject’s behavior, or dependent variable (DV). Collected observational data are converted to a standard metric, plotted in a line graph, and visually analyzed to note variations in trend, level, and variability of the data across baseline and intervention conditions.

General Overviews

First described by Murray Sidman in 1960 (Sidman 1960) to study behavioral principles within psychology and then later expanded to become a central element of applied behavior analysis (Baer, et al. 1968; Baer, et al. 1987; Cooper, et al. 2007), single-subject research is used across several disciplines including special and general education, social work, communication sciences, and rehabilitative therapies. Horner and colleagues report that more than forty-five scholarly journals accept and publish single-subject research studies (Horner, et al. 2005). See also Campbell and Stanley 1963, Kazdin and Tuma 1982, and Kratochwill and Levin 1992.

  • Baer, Donald M., Montrose M. Wolf, and Todd R. Risley. 1968. Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis 1.1: 91–97.

    DOI: 10.1901/jaba.1968.1-91

    A seminal article in the field of applied behavior analysis, this article operationally defines the essential features of applied behavior analysis and the experimental conditions under which applied behavior analysis principles can and should be studied.

  • Baer, Donald M., Montrose M. Wolf, and Todd R. Risley. 1987. Some still-current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis 20.4: 313–327.

    DOI: 10.1901/jaba.1987.20-313

    A follow-up to the seminal Baer, et al. 1968, this article provides a foundation for the applied behavior analysis field by outlining the general premise of applied behavior analytic research, which in turn underlies single-subject designs. The paper articulates that analysis of an effect in behavior analytic research, particularly single-subject research, should focus on practical significance that can be observed.

  • Campbell, Donald T., and Julian C. Stanley. 1963. Experimental and quasi-experimental designs for research. Boston: Houghton Mifflin.

    In this seminal text on conducting behavioral research, Campbell and Stanley provide a chapter on single-subject designs and the framework for what has become the standard treatment of threats to internal and external validity and of how each design accounts for them.

  • Cooper, John O., Timothy E. Heron, and William L. Heward. 2007. Applied behavior analysis. 2d ed. Upper Saddle River, NJ: Pearson/Merrill-Prentice Hall.

    This textbook provides an overview of single-subject research designs and, importantly, details how to assess behavior change using visual analysis in clinical/applied and research settings within an applied behavior analysis framework.

  • Horner, Robert H., Edward G. Carr, James Halle, Gail McGee, Samuel Odom, and Mark Wolery. 2005. The use of single-subject research to identify evidence-based practices in special education. Exceptional Children 71.2: 165–179.

    This article outlines the essential features within and across single-subject research studies to ascertain a minimal level of evidence to brand the practice under investigation “evidence-based.”

  • Kazdin, Alan E., and A. Hussain Tuma, eds. 1982. Single-case research designs. New Directions for Methodology of Behavioral Science 13. San Francisco: Jossey-Bass.

    This edited text provides a rationale and the basic logic of single-subject research within the context of traditional psychological and clinical research. Although dated with respect to current issues and design variations, the text provides an historical context and establishes the roots of single-subject research.

  • Kratochwill, Thomas R., and Joel R. Levin, eds. 1992. Single-case research design and analysis: New directions for psychology and education. Hillsdale, NJ: Lawrence Erlbaum.

    This text provides a series of chapters setting the stage for contemporary issues related to single-subject research including limitations of visual analysis, effect size, statistical analysis, and the appropriateness of meta-analyses across single-subject research. The book ends with a chapter on the current state of the art at the time and recommended future directions.

  • Sidman, Murray. 1960. Tactics of scientific research: Evaluating experimental data in psychology. New York: Basic Books.

    A seminal text that laid the foundation for single-subject research within behavioral psychology.

Basics of Single-Subject Research Design

Unlike group design experimental research, in which one group serves as a control or nontreatment group and the other group receives the intervention, within single-subject research each subject serves as a control through the replication of baseline phases of the study. During the baseline phase of the study, direct observation data are collected under business-as-usual conditions. For example, if the researcher is interested in the impact of a specific social behavioral intervention within a preschool during free play conditions, the teacher would be instructed to follow normal routines and procedures. To determine a functional relationship between the IV and DV, each single-subject study must minimally introduce two replications of the IV and baseline conditions. The number of data points collected within each condition or phase is determined by when a clear pattern is observed (i.e., low variability and a clearly increasing, decreasing, or flat trend; see Visual Analysis). Several design variations are available within single-subject research, all of which incorporate the basic features of comparing a participant’s behavior across replications of baseline and intervention conditions. The primary design is referred to as a withdrawal or reversal design, in which a baseline (A condition) rate of behavior is established and then a single IV (B condition) is introduced. Once a clear pattern of behavior is noted through visual analysis, the researcher withdraws the intervention or reverts to baseline conditions. Following a clear and stable pattern of observed behavior within the second baseline condition, the IV is reintroduced, producing an A–B–A–B series of phases. Variations on this design can include introducing a second IV (C condition) or combinations of IVs (B + C condition), allowing the researcher to compare several variables or combinations of variables to the baseline condition and across various iterations of the IV (e.g., A–B–A–C–A–BC). Common additional single-subject design variations include alternating or simultaneous treatment designs in which, following a baseline condition, each subject receives two or three variations of the IV within a single or closely timed daily session. A final common variation is the multiple-baseline design. Multiple-baseline designs meet the replication requirement by staggering the introduction of the IV following baseline across two to three subjects, or across two to three behaviors or settings for the same subject. Baseline data are collected simultaneously across all two or three subjects or conditions. The IV is introduced for the first subject or behavior while baseline data continue to be collected for the remaining subjects or conditions. Once a functional relationship is observed for the first subject (i.e., a clear trend or level change in the DV), the IV is introduced for the second subject or condition. Once a functional relationship is observed in the second condition, the IV is introduced for the final subject or condition. An overall functional relationship is determined through the repeated impact of the IV on the DV across the two or three baselines. The list of sources in this section provides key works on design development and implementation, including the seminal text Kazdin 2011, as well as more comprehensive texts such as Tawney and Gast 1984 and Gast 2010. Barlow, et al. 2009 and Richard, et al. 1999 provide further information on designs and address more contemporary issues. See also Morgan and Morgan 2009 and Riley-Tillman and Burns 2009.
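
To make the phase logic concrete, the following minimal Python sketch (using entirely hypothetical data, not drawn from any of the sources below) lays out the four phases of an A–B–A–B withdrawal design and summarizes the level of behavior within each phase. In practice these data would be plotted as a line graph and judged through visual analysis rather than reduced to phase means.

```python
# Hypothetical rates of problem behavior (responses per minute) for one
# subject across the four phases of an A-B-A-B withdrawal design.
phases = {
    "A1 (baseline)":     [3.1, 2.8, 3.4, 3.0, 3.2],
    "B1 (intervention)": [1.9, 1.4, 1.1, 0.9, 0.8],
    "A2 (withdrawal)":   [2.6, 2.9, 3.1, 3.0, 3.3],
    "B2 (intervention)": [1.2, 0.9, 0.7, 0.6, 0.5],
}

# Summarize the level (mean) within each phase; the return to baseline-like
# levels in A2 and the repeated drop in B2 are what support a functional
# relationship between the IV and the DV.
for label, data in phases.items():
    print(f"{label}: n = {len(data)}, mean level = {sum(data) / len(data):.2f}")
```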

  • Barlow, David H., Matthew K. Nock, and Michel Hersen. 2009. Single case experimental designs: Strategies for studying behavior change. 3d ed. Boston: Pearson.

    Similar to other texts, the authors provide detailed chapters on the various single-subject designs. A strength of this widely used text is the inclusion of issues related to conducting single-subject research, general procedures common to all design variations, and an expanded chapter on statistical analysis in single-subject research that includes most of the currently used statistical procedures.

  • Gast, David L. 2010. Single-subject research methodology in behavioral sciences. New York: Routledge.

    Gast provides a comprehensive text on design variations and issues related to conducting quality single-subject research. In addition to core design issues, the author provides chapters on issues related to conducting applied research in educational and clinical settings, ethics related to conducting applied research, and developing single-subject research reports suitable for peer-reviewed periodicals.

  • Kazdin, Alan E. 2011. Single-case research designs: Methods for clinical and applied settings. 2d ed. New York: Oxford Univ. Press.

    This book has become the seminal text on single-subject research design. Kazdin provides detailed chapters on the most commonly used single-subject designs, examples from peer-reviewed journals, and the historical context for the use of single-subject design in behavioral research.

  • Morgan, David L., and Robin K. Morgan. 2009. Single-case research methods for the behavioral and health sciences. Thousand Oaks, CA: SAGE.

    Morgan and Morgan provide a basic text that covers all the key elements of single-subject research designs. The text is appropriate for an introductory course in single-subject research or as a supplemental text for a research design course. The authors provide a useful final chapter on current issues and future directions in single-subject research, such as the appropriateness of conducting meta-analyses.

  • Richard, Steven B., Ronald L. Taylor, Rangasamy Ramasamy, and Rhonda Y. Richards. 1999. Single-subject research: Applications in educational and clinical settings. San Diego, CA: Singular.

    This text provides chapters focusing on common single-subject designs, issues related to the validity of single-subject research, and basic direct observational data collection. At the end of each chapter the authors provide a summary checklist that defines key terms contained within the chapter.

  • Riley-Tillman, T. Chris, and Matthew K. Burns. 2009. Evaluating educational interventions: Single-case design for measuring response to intervention. New York: Guilford.

    One of a series of texts on response to intervention, this text provides a solid overview of single-subject research with an emphasis on academic behavior. The text is useful for both researchers and practitioners attempting to make systematic data-based decisions relative to instruction for at-risk students.

  • Tawney, James W., and David L. Gast. 1984. Single-subject research in special education. Columbus, OH: Merrill.

    Like Kazdin 2011, this text provides chapters devoted to the various single-subject designs and uses peer-reviewed articles as examples. A real strength of the text is the step-by-step methodology the authors propose to conduct visual analysis, the primary data analysis within single-subject research.

Data Collection

Like all research, accurate and reliable measurement of DVs is essential. A hallmark of single-subject research is continuous direct observation of subject behavior. Unlike some group designs that may include a repeated measure, single-subject research does not use standardized or perceptual measures (i.e., rating scales) as the primary unit of analysis. Single-subject research requires multiple probes of the DV, typically daily, through direct observation of behavior. Direct observation data fall into two categories: (1) event based or (2) interval based. Event-based strategies record each instance of behavior to provide a direct measure of the targeted behavior. Basic event-based strategies include (a) frequency counts, which are then converted to rate per minute, and (b) duration recordings, which are converted to a percentage of time. Interval-based strategies record the presence or absence of the target behavior after a brief passage of time (e.g., thirty seconds). Interval-based strategies include (a) whole interval (the subject must engage in the target behavior during the entire interval); (b) partial interval (the subject engages in the target behavior at any point during the interval); and (c) momentary time sampling (the subject is engaging in the target behavior at the end of the interval). All interval-based data collection strategies are reported as a percentage of intervals. As in group design research, measurement reliability and validity are also essential. Within single-subject research, reliability is determined through interobserver agreement. At a minimum, two data collectors independently gather direct observation data and compute the percentage of agreement through various formulas based on the type of data collected (e.g., frequency, duration, interval). Most single-subject research collects interobserver data across 25 to 30 percent of the total observation sessions within the study. Validity of the DV is typically established by using previously published or commonly established operational definitions of the behavior paired with commonly used direct observation strategies. Chafouleas, et al. 2007 provides a broad range of behavioral assessment strategies applicable to educational settings, while Miltenberger 2005 provides a range of direct observation and data collection strategies along with examples relative to education. Thompson, et al. 2000 provides direct observation and data collection definitions and strategies across a broad range of populations. See also Alberto and Troutman 2012 and Yoder and Symons 2010.
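
The conversions and agreement calculations described above are simple arithmetic. The following Python sketch (hypothetical values; the two agreement formulas shown are common variants, not the only ones used in this literature) illustrates rate per minute, percentage of session time, percentage of intervals, and two interobserver agreement (IOA) computations.

```python
def rate_per_minute(count, session_minutes):
    # Event-based (frequency) data converted to a standard rate metric.
    return count / session_minutes

def percent_duration(behavior_seconds, session_seconds):
    # Duration recording converted to a percentage of the session.
    return 100.0 * behavior_seconds / session_seconds

def percent_intervals(record):
    # record: list of booleans, one per observation interval.
    return 100.0 * sum(record) / len(record)

def total_count_ioa(count_obs1, count_obs2):
    # Agreement for frequency data: smaller count divided by larger count.
    return 100.0 * min(count_obs1, count_obs2) / max(count_obs1, count_obs2)

def interval_ioa(record_obs1, record_obs2):
    # Interval-by-interval agreement: intervals scored identically / total.
    agreements = sum(a == b for a, b in zip(record_obs1, record_obs2))
    return 100.0 * agreements / len(record_obs1)

print(rate_per_minute(12, 20))                       # 0.6 responses per minute
print(percent_duration(180, 1200))                   # 15.0 percent of session time
print(percent_intervals([True, False, True, True]))  # 75.0 percent of intervals
print(total_count_ioa(12, 10))                       # ~83.3 percent agreement
print(interval_ioa([True, False, True], [True, True, True]))  # ~66.7 percent
```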

  • Alberto, Paul A., and Anne C. Troutman. 2012. Applied behavior analysis for teachers. 9th ed. Upper Saddle River, NJ: Merrill.

    In this seminal text on behavior management, Alberto and Troutman provide chapters on direct observation data collection strategies, operationally defining target behaviors, and the basics of converting data to a standard metric suitable for graphing.

  • Chafouleas, Sandra M., T. Chris Riley-Tillman, and George Sugai. 2007. School-based behavior assessment: Informing intervention and instruction. New York: Guilford.

    The authors provide a chapter specific to direct observation of student social and academic behavior.

  • Miltenberger, Raymond G. 2005. Strategies for measuring behavior change. In Individualized supports for students with problem behaviors: Designing positive behavior plans. Edited by Linda M. Bambura and Lee Kern, 107–128. New York: Guilford.

    The author includes sample direct observation data collection tools.

  • Thompson, Travis, David Felce, and Frank J. Symons, eds. 2000. Behavioral observation: Technology and application in developmental disabilities. Baltimore: Brookes.

    The text provides several chapters focusing on the use of technology to assist in direct observation, although the coverage is somewhat dated given the rapid pace of technological advancement.

  • Yoder, Paul, and Frank Symons. 2010. Observational measurement of behavior. New York: Springer.

    The authors provide a range of strategies to measure social behavior. In addition, the text provides information on reliability and validity of direct observation data along with strategies to train data collectors to gather reliable data.

Visual Analysis

Whereas group designs allow the researcher to determine the impact of the intervention, or the IV, through a statistical comparison of group means, single-subject research relies on visual analysis to determine the direction and impact of the IV on the DV. Results of single-subject research are typically evaluated through visual analysis, a technique in which a graphic display of data is examined to draw reasonable conclusions or make reasonable hypotheses about the relationship, or lack thereof, between control and experimental conditions. Visual analysis relies on the analysis and interpretation of six features of a graphic display of data: (1) level, (2) trend, (3) variability, (4) immediacy of the effect, (5) overlap, and (6) consistency of data pattern across similar phases (Kratochwill, et al. 2010). Visual analysis allows researchers to identify the effect of an IV on a behavior or set of behaviors across time, using each data point to identify level, trend, and variability within, across, and between conditions. Unlike statistical analyses, which use commonly accepted computations, visual analysis relies on expert opinion to determine the overall impact of the IV on the DV. Given the subjective nature of visual analysis, the field has recently attempted to apply statistical analysis to determine functional relationships. In addition, with the recent emphasis on using effect size both within and across studies to determine whether interventions meet effectiveness standards to be branded an “evidence-based” practice, single-subject researchers have also recently attempted to develop standard measures of effect size that would allow for meta-analyses across multiple single-subject studies. The remainder of the essay focuses on recent work examining visual and statistical analyses of single-subject research and the emerging field of conducting meta-analyses across single-subject research. Highlighted issues include the lack of formal decision rules and the reliability of the method (Jones, et al. 1978; Matyas and Greenwood 1990; Park, et al. 1990). See also Kahng, et al. 2010; Hagopian, et al. 1997; and Harbst, et al. 1991.
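
As a minimal illustration of three of the features listed above, the following Python sketch (hypothetical data) summarizes level, trend, and variability for a baseline and an intervention phase. Such numerical summaries can support, but do not replace, visual inspection of the plotted data.

```python
from statistics import mean, stdev

def slope(y):
    # Least-squares trend of the series against session number (0, 1, 2, ...).
    x = list(range(len(y)))
    x_bar, y_bar = mean(x), mean(y)
    return sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / sum(
        (xi - x_bar) ** 2 for xi in x
    )

def summarize(label, y):
    print(f"{label}: level = {mean(y):.2f}, trend = {slope(y):+.2f} per session, "
          f"variability (SD) = {stdev(y):.2f}")

baseline     = [8, 9, 7, 8, 9, 8]   # hypothetical DV values per session
intervention = [6, 5, 4, 3, 3, 2]

summarize("Baseline    ", baseline)
summarize("Intervention", intervention)
```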

  • Fisher, Wayne W., Michael E. Kelley, and Joanna E. Lomas. 2003. Visual aids and structured criteria for improving visual inspection and interpretation of single-case designs. Journal of Applied Behavior Analysis 36.3: 387–406.

    DOI: 10.1901/jaba.2003.36-387

    This article reports the results of a series of studies examining the use of structured criteria, a modified split-middle procedure, to increase the reliability of visual inspection. Across the studies, Fisher and colleagues find that the structured criteria are more accurate than statistical tests and, when taught to raters, significantly increase interrater agreement, suggesting that training and structured visual analysis criteria increase both interrater agreement and accuracy.

  • Hagopian, Louis P., Wayne W. Fisher, Rachel H. Thompson, Jamie Owen-Deschryver, Brian A. Iwata, and David P. Wacker. 1997. Toward the development of structured criteria for interpretation of functional analysis data. Journal of Applied Behavior Analysis 30.2: 313–326.

    DOI: 10.1901/jaba.1997.30-313

    This article reports on a series of studies examining whether structured criteria for visual analysis increase interrater agreement on multi-element single-subject designs. The first study found little agreement between independent raters; however, with the addition of structured criteria, agreement significantly increased. The results indicate that raters can be trained to apply structured criteria and that doing so increases interrater agreement in visual analysis.

  • Harbst, Kimberly B., Kenneth J. Ottenbacher, and Susan R. Harris. 1991. Interrater reliability of therapists’ judgments of graphed data. Physical Therapy 71.2: 107–115.

    This study examines the interrater agreement between thirty physical therapists using visual analysis and a split-middle trend line. The results indicate that interrater agreement is low overall (intraclass correlation coefficients from 0.37 to 0.55) and that no particular feature of the graphs predicts agreements. Results indicate low agreement even with a structured criterion.

  • Jones, Richard R., Mark R. Weinrott, and Russell S. Vaught. 1978. Effects of serial dependency on the agreement between visual and statistical inference. Journal of Applied Behavior Analysis 11.2: 277–283.

    DOI: 10.1901/jaba.1978.11-277

    This study examines the influence of serial dependence (i.e., autocorrelation) on visual analysis agreement. Most prior studies find mean shift and trend to be predictors of disagreements between raters. This study finds that serial dependence also negatively influences agreement in visual analysis.

  • Kahng, Sung Woo, Kyong-Mee Chung, Katharine Gutshall, Steven C. Pitts, Joyce Kao, and Kelli Girolami. 2010. Consistent visual analyses of intrasubject data. Journal of Applied Behavior Analysis 43.1: 35–45.

    DOI: 10.1901/jaba.2010.43-35

    This study replicates and updates DeProspero and Cohen’s 1979 study “Inconsistent Visual Analyses of Intrasubject Data” (Journal of Applied Behavior Analysis 12.4: 573–579) with forty-five peer reviewers from the Journal of Applied Behavior Analysis and Journal of Experimental Analysis of Behavior, finding results contradictory to previous interrater agreement research. Overall, the raters in this study are very close in agreement (r = 0.93) and demonstrate that well-trained reviewers can agree on the analysis of effectiveness using visual analysis.

  • Kratochwill, Thomas R., John Hitchcock, Robert H. Horner, et al. 2010. Single-case designs technical documentation. Washington, DC: What Works Clearinghouse.

    This technical report provides a complete overview of visual analysis and directions for how to conduct visual analysis based on criteria developed for What Works Clearinghouse, a part of the US Department of Education. This document is the most up-to-date with regard to applying visual analysis to single-subject research.

  • Matyas, Thomas A., and Kenneth M. Greenwood. 1990. Visual analysis of single-case time series: Effects of variability, serial dependence, and magnitude of intervention effects. Journal of Applied Behavior Analysis 23.3: 341–351.

    DOI: 10.1901/jaba.1990.23-341

    This study examines the accuracy of thirty-seven postgraduate students enrolled in a single-subject research course. The results indicate that the judges’ accuracy rates were very low and that serial dependence had a negative effect on raters’ ability to accurately identify an intervention effect. However, the raters were not well-trained peer reviewers but students learning about single-subject research.

  • Park, Hyun-Sook, Leonard Marascuilo, and Robert Gaylord-Ross. 1990. Visual inspection and statistical analysis of single-case designs. Journal of Experimental Education 58.4: 311–320.

    This study examines the reliability of five well-trained judges on the identification of an intervention effect using visual analysis and comparisons among the judges using a statistical analysis of effect (randomization test). Overall, the judges’ interrater agreement was moderate (54–74 percent), with higher agreements for studies without an effect. Agreement was also high among judges and the statistical analysis for no intervention effect graphs. Available online for purchase or by subscription.

Statistical Analysis

Advancements in the analysis of single-subject research have been driven by the application of parametric and nonparametric approaches for calculating an effect size to support evidence-based interventions (Maggin, et al. 2011). Although several scholars (see Hopkins, et al. 1998) have cautioned against effect sizes in single-subject research, a cadre of others have advocated for and examined their utility (see Nourbakhsh and Ottenbacher 1994). The role of calculated effect sizes in single-case research is not a new phenomenon, and research continues to develop new methods to accommodate the unique needs of the design (Maggin, et al. 2011; Parker, et al. 2009). See also the studies by Manolov and colleagues (Manolov, et al. 2010a; Manolov, et al. 2010b) as well as Fisch 2001 and Parker and Brossart 2003.
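
One of the simplest and most widely cited nonoverlap effect sizes in this literature is the percentage of nonoverlapping data (PND). The following Python sketch (hypothetical data) shows the basic calculation for a behavior targeted for reduction: the percentage of intervention-phase points that fall below the lowest baseline point. It is an illustration of the general idea, not an endorsement of PND over the regression-based and other approaches discussed in the sources below.

```python
def pnd(baseline, intervention, decrease_expected=True):
    # Percentage of nonoverlapping data: share of intervention points that
    # fall below (for reduction targets) or above (for acquisition targets)
    # the most extreme baseline point.
    if decrease_expected:
        threshold = min(baseline)
        nonoverlapping = sum(1 for y in intervention if y < threshold)
    else:
        threshold = max(baseline)
        nonoverlapping = sum(1 for y in intervention if y > threshold)
    return 100.0 * nonoverlapping / len(intervention)

baseline     = [8, 9, 7, 8, 9, 8]   # hypothetical DV values per session
intervention = [6, 5, 4, 3, 3, 2]
print(f"PND = {pnd(baseline, intervention):.1f}%")  # 100.0% in this example
```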

  • Fisch, Gene S. 2001. Evaluating data from behavioral analysis: Visual inspection or statistical models? Behavioural Processes 54.1–3: 137–154.

    DOI: 10.1016/S0376-6357(01)00155-3

    This article provides an overview of six nonparametric approaches for calculating intervention effects in single-subject design research. Fisch contends that the unreliability of visual analysis requires the addition of nonparametric tests to accompany visual interpretations of effectiveness. Available online for purchase or by subscription.

  • Hopkins, B. L., Brian L. Cole, and Tina L. Mason. 1998. A critique of the usefulness of inferential statistics in applied behavior analysis. Behavior Analyst 21.1: 125–137.

    This article contends that inferential statistics in single-subject design are as subjective as visual analysis and that the use of inferential statistics adds nothing to the complex nature of single-subject design research.

  • Maggin, Daniel M., Hariharan Swaminathan, Helen J. Rogers, Breda V. O’Keefe, George Sugai, and Robert H. Horner. 2011. A generalized least squares regression approach for computing effect sizes in single-case research: Application examples. Journal of School Psychology 49.3: 301–321.

    DOI: 10.1016/j.jsp.2011.03.004

    This study is the first application of the generalized least squares regression approach for analyzing single-subject research. The study demonstrates that the modeling procedure addresses all concerns about statistical applications to single-subject research (e.g., serial dependence). This approach holds much promise but has not been fully field tested. Available online for purchase or by subscription.

  • Manolov, Rumen, Jaume Arnau, Antonio Solanas, and Roser Bono. 2010a. Regression-based techniques for statistical decision making in single-case designs. Psicothema 22.4: 1026–1032.

    This study examines the reliability of four (ordinary least squares) regression-based approaches for the analysis of single-subject research in the presence of autocorrelation and short time series. Results indicate that none of the four accurately assess the single-subject data in the presence of high autocorrelation.

  • Manolov, Rumen, Antonio Solanas, Isis Bulte, and Patrick Onghena. 2010b. Data-division-specific robustness and power of randomization tests for ABAB designs. Journal of Experimental Education 78.2: 191–214.

    DOI: 10.1080/00220970903292827

    This study conducts a Monte Carlo test of the randomization test applied to ABAB designs with varying levels of autocorrelation. Overall, autocorrelation significantly impacts the accuracy of results, while only data with large treatment effects are found to be consistently accurate. Available online for purchase or by subscription.

  • Nourbakhsh, Mohammed R., and Kenneth J. Ottenbacher. 1994. The statistical analysis of single-subject data: A comparative analysis. Physical Therapy 74.8: 768–776.

    This study is an early attempt to examine agreement between different statistical approaches for analyzing single-subject research. The authors find little agreement across the three methods assessed, indicating that these statistical methods are no more reliable than visual analysis.

  • Parker, Richard I., and Daniel F. Brossart. 2003. Evaluating single-case research data: A comparison of seven statistical methods. Behavior Therapy 34.2: 189–211.

    DOI: 10.1016/S0005-7894(03)80013-8

    This study examines the relationship between and power of seven different statistical approaches for analyzing single-subject research. Four of the approaches demonstrate adequate power, but variability in overall effect sizes across the approaches makes identification of a recommended approach difficult. Available online for purchase or by subscription.

  • Parker, Richard I., Kimberly Vannest, and Leanne Brown. 2009. The improvement rate difference for single-case research. Exceptional Children 75.2: 135–150.

    This article describes an overlap method for the analysis of single-subject research that can be calculated quickly from a graph and has confidence intervals. Parker and colleagues explain how to calculate the improvement rate difference and provide an empirical example.

Meta-analysis and Single-Subject Designs

A number of concerns have been raised about the application of meta-analytic techniques to the synthesis of single-subject research, spanning both methodological and conceptual issues. The methodological issues include the assumptions of inferential statistics, such as independence, and the need for reliable estimates of mean shift, slope, and variability. Conceptual criticisms include (1) the use of effect sizes to provide statistical control, (2) determination of effect by statistical process rather than visual analysis, and (3) the argument that growth in knowledge comes through replication and not statistical synthesis (see Busk and Serlin 1992; Jenson, et al. 2007). Although these concerns have been forwarded, meta-analyses and the development of different synthesis procedures have proliferated. The goal of a meta-analysis is to provide an omnibus result that identifies what we know about overall outcomes from an intervention or treatment. Whether agreement across fields will be obtained is unknown and speculative at this point, but it is clear from the growing research interest and modeling procedures for single-subject meta-analysis that the procedures will continue to proliferate and inform future research and practice. Wolery, et al. 2010 indicates that, by applying a meta-analytic approach, the variation in situations, procedures, and participant characteristics that replication provides is lost. See Van den Noorgate and Onghena 2003 and Van den Noorgate and Onghena 2008 for information on multilevel models as well as Beretvas and Chung 2008; Shadish, et al. 2008; and Ugille, et al. 2012.
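
The multilevel models recommended in this literature nest measurement occasions within cases and cases within studies, but the core aggregation idea can be illustrated more simply. The following Python sketch (hypothetical effect sizes and variances) combines study-level estimates with inverse-variance weights, the basic building block that the multilevel approaches extend; it is an illustration only, not the procedure proposed in any of the sources below.

```python
# Hypothetical (effect size, variance) pairs, one per single-subject study.
studies = [
    (0.80, 0.04),
    (1.10, 0.09),
    (0.55, 0.02),
]

# Fixed-effect, inverse-variance weighting: more precise estimates count more.
weights = [1.0 / var for _, var in studies]
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
se = (1.0 / sum(weights)) ** 0.5

print(f"Pooled effect = {pooled:.2f} (SE = {se:.2f})")
```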

  • Beretvas, S. Natasha, and Hyewon Chung. 2008. A review of meta-analyses of single-subject experimental designs: Methodological issues. Evidence-Based Communication Assessment and Intervention 2.3: 129–141.

    DOI: 10.1080/17489530802446302

    This article provides a statistical review of the methodological concerns of using meta-analysis with single-subject research. Beretvas and Chung then provide guidelines for future statistical synthesis approaches. Available online for purchase or by subscription.

  • Busk, Patricia L., and Ronald C. Serlin. 1992. Meta-analysis for single-case research. In Single-case research design and analysis: New directions for psychology and education. Edited by Thomas R. Kratochwill and Joel R. Levin, 187–212. Hillsdale, NJ: Lawrence Erlbaum.

    This seminal chapter on meta-analysis of single-subject research outlines early concerns about the procedure and forwards a nonparametric model for conducting single-subject meta-analyses.

  • Jenson, William R., Elaine Clark, John C. Kircher, and Sean D. Kristjansson. 2007. Statistical reform: Evidence-based practice, meta-analyses, and single-subject designs. Psychology in the Schools 44.5: 483–493.

    DOI: 10.1002/pits.20240

    This article reviews the role of meta-analysis in the identification of evidence-based practices and discusses four meta-analytic approaches for single-subject research: (1) percentage of nonoverlapping data, (2) the nonparametric models of Busk and Serlin 1992, (3) interrupted time-series autocorrelation method, and (4) hierarchical linear modeling. The review identifies hierarchical linear modeling as the most promising, yet still developing, approach. Available online for purchase or by subscription.

  • Shadish, William R., David M. Rindskopf, and Larry V. Hedges. 2008. The state of the science in the meta-analysis of single-case experimental designs. Evidence-Based Communication Assessment and Intervention 2.3: 188–196.

    DOI: 10.1080/17489530802581603

    This seminal article describes the state of the science in the quantitative synthesis of single-subject research. Shadish and colleagues review recommended procedures and identify multilevel modeling as the approach with the most promise for addressing both the statistical and conceptual concerns about meta-analysis of single-subject research. Available online for purchase or by subscription.

  • Ugille, Maaike, Mariola Moeyaert, S. Natasha Beretvas, John Ferron, and Wim Van den Noorgate. 2012. Multilevel meta-analysis of single-subject experimental designs: A simulation study. Behavior Research Methods 44.4: 1244–1254.

    DOI: 10.3758/s13428-012-0213-1

    This study conducts a Monte Carlo simulation of the reliability and validity of a three-level regression model for meta-analysis of single-subject research. The results of the study indicate that the approach estimates treatment effects with sufficient power when the number of data points and the number of studies are large. Available online for purchase or by subscription.

  • Van den Noorgate, Wim, and Patrick Onghena. 2003. Combining single-case experimental data using hierarchical linear models. School Psychology Quarterly 18.3: 325–346.

    DOI: 10.1521/scpq.18.3.325.22577

    This article was among the first to forward hierarchical linear models (also known as multilevel models) as a methodology for meta-analyses of single-subject research. Van den Noorgate and Onghena provide an overview of their model and comparison with a standardized mean difference approach. Available online for purchase or by subscription.

  • Van den Noorgate, Wim, and Patrick Onghena. 2008. A multilevel meta-analysis of single-subject experimental design studies. Evidence-Based Communication Assessment and Intervention 2.3: 142–151.

    DOI: 10.1080/17489530802505362

    This article provides an updated hierarchical linear modeling approach with a single effect size calculation for conducting meta-analyses of single-subject research. Available online for purchase or by subscription.

  • Wolery, Mark, Matthew Busick, Brian Reichow, and Erin E. Barton. 2010. Comparison of overlap methods for quantitatively synthesizing single-subject data. Journal of Special Education 44.1: 18–28.

    DOI: 10.1177/0022466908328009

    This review examines the reliability and validity of overlap methods for synthesizing single-subject research. Although the percentage of nonoverlapping data method was promising, a number of significant methodological concerns are noted. The key part of this article is the ten features a meta-analytic approach for single-subject research must address. Available online for purchase or by subscription.
