Online Public Opinion Polling (Political Science)
D. Sunshine Hillygus, Brian Guay
  • LAST REVIEWED: 04 June 2020
  • LAST MODIFIED: 28 August 2018
  • DOI: 10.1093/obo/9780199756223-0250


In an environment of declining survey response rates, decreased use of landline telephones, and rising costs of telephone and in-person surveys, political science researchers have increasingly turned to online surveys to measure public opinion. In light of the growing prevalence of online surveys in peer-reviewed journals, it is important to recognize the wide variation in their design and quality. In the most literal sense, an online survey simply refers to a particular mode of interviewing in which the respondent self-completes a questionnaire over the Internet, whether on a computer, tablet, or cell phone. However, conducting surveys online is far more complex than simply relocating an in-person or phone survey onto the Internet. Online surveys use a variety of different sampling designs, recruitment strategies, and implementation procedures, all of which have implications for data quality. Some online surveys use probability-based designs, such as those sampled from a known population list of e-mail addresses (e.g., a university or business). Multimode probability-based surveys are also increasingly common, such as samples drawn from address-based sampling frames in which online completion is encouraged but a telephone or mail completion option is also available. Because there is no sample frame from which to draw a random sample of e-mail addresses from the general population, however, most online surveys use nonprobability designs, such as those in which respondents are recruited through website banner advertisements. Other nonprobability online surveys use samples from opt-in online panels, in which individuals agree to answer multiple surveys in exchange for money or other incentives. Here again, there is wide variation in design and quality. Online panel vendors differ in whether and how they attempt to make a sample representative of a broader population, using quotas, matching, and/or post-stratification weighting.
They also vary in terms of their workload expectations, incentive structures, and panel management. In sum, the considerable heterogeneity in online surveys is too often overlooked. Given the discipline’s overall embrace of the method, making sense of this heterogeneity is essential to understanding how best to design and analyze online surveys and, ultimately, evaluate knowledge claims being made with the resulting data. Fortunately, research on online surveys constitutes a large and growing segment of the survey methodology literature. The purpose of this bibliography is to provide a resource for understanding the variation in methods and quality of online data collection, the relative advantages and disadvantages of using online surveys, and best practices for designing and analyzing online surveys.

General Overviews

The survey industry was quick to follow as more and more people went online toward the end of the 20th century; indeed, surveys were being conducted via e-mail as early as the 1980s (Callegaro, et al. 2015). Today, online surveys make up a significant portion of academic, business, and government survey work (Schaeffer and Dykema 2011). Although disparities in Internet access were initially viewed as a key challenge to online surveys, Internet use in the United States (and around the world) has risen steadily since the turn of the century, with nearly 90 percent of Americans having access to the Internet in 2018. With the growth in the adoption of Internet-enabled mobile devices, even traditionally harder-to-reach populations, such as minorities, those in rural areas, and older individuals, now have widespread access to the Internet. Nonetheless, there remains considerable debate about online survey methods, fueled by the wide variation in sampling design and implementation approaches. As a result, research on online surveys has occupied an increasingly large place in the survey methodology literature over the past two decades. This section identifies seminal texts on survey methodology that provide the necessary foundation for evaluating online surveys, as well as comprehensive reviews of the online survey research. General overviews and edited volumes—including Groves, et al. 2009; Krosnick, et al. 2015; and Wolf, et al. 2017—provide a history of survey research and guidance on the design, implementation, and evaluation of online surveys. Others focus specifically on evaluating data quality through the perspective of the total survey error approach, examining the multiple sources through which error can be introduced during the survey process (Biemer and Lyberg 2003, Weisberg 2009, Groves and Lyberg 2010). Couper 2000 and Tourangeau, et al. 2013 cover a wide range of topics specific to online surveys, including design strategies, implementation approaches, and differences between various types of online surveys.

  • Biemer, Paul P., and Lars E. Lyberg. Introduction to Survey Quality. Hoboken, NJ: John Wiley, 2003.

    DOI: 10.1002/0471458740

    A classic text that introduces principles and concepts of relevance to evaluating data quality. Biemer and Lyberg focus on methods used for evaluating survey error and improving survey quality.

  • Callegaro, Mario, Katja Lozar Manfreda, and Vasja Vehovar. Web Survey Methodology. Los Angeles: SAGE, 2015.

    This textbook provides practical guidance on techniques for designing and implementing online surveys, as well as an overview of recent research on modern advances in online survey research, such as the use of mobile technologies and mixed-mode designs.

  • Couper, Mick P. “Web Surveys: A Review of Issues and Approaches.” Public Opinion Quarterly 64.4 (2000): 464–494.

    DOI: 10.1086/318641

    In this seminal article, Couper provides a typology of Internet surveys at the outset of the ascent of online surveys, establishing language used to distinguish between the various components and categories of online surveys.

  • Groves, Robert M., Floyd J. Fowler Jr., Mick P. Couper, James M. Lepkowski, Eleanor Singer, and Roger Tourangeau. Survey Methodology. 2d ed. Wiley Series in Survey Methodology. Hoboken, NJ: John Wiley, 2009.

    This seminal text, now in its second edition, covers all aspects of the survey life cycle, including sample design, the various sources of measurement and sampling error, nonresponse, design strategies, and the analysis of survey data.

  • Groves, Robert M., and Lars Lyberg. “Total Survey Error: Past, Present, and Future.” Public Opinion Quarterly 74.5 (2010): 849–879.

    DOI: 10.1093/poq/nfq065

    Groves and Lyberg present a history and useful overview of the total survey error approach, emphasizing its many strengths and identifying opportunities for improvements based on its limitations.

  • Krosnick, Jon A., Stanley Presser, Kaye Husbands Fealing, Steven Ruggles, and David Vannette. The Future of Survey Research: Challenges and Opportunities; A Report to the National Science Foundation Based on Two Conferences Held on October 3–4 and November 8–9, 2012. Alexandria, VA: National Science Foundation Advisory Committee for the Social, Behavioral and Economic Sciences Subcommittee on Advancing SBE Survey Research, 2015.

    This National Science Foundation report provides a comprehensive overview of the challenges facing survey-based data collection, current best practices, and opportunities for future innovations.

  • Schaeffer, Nora Cate, and Jennifer Dykema. “Questions for Surveys: Current Trends and Future Directions.” Public Opinion Quarterly 75.5 (2011): 909–961.

    DOI: 10.1093/poq/nfr048

    The authors outline trends and themes in the survey methodology literature published in Public Opinion Quarterly from 2000 to 2010, with considerable focus on the rise and evolution of online surveys.

  • Tourangeau, Roger, Frederick G. Conrad, and Mick P. Couper. The Science of Web Surveys. Oxford: Oxford University Press, 2013.

    DOI: 10.1093/acprof:oso/9780199747047.001.0001

    A comprehensive resource for the development and evaluation of online surveys, including a review of sampling methods and measurement strategies. This text also describes statistical techniques aimed at reducing selection and coverage biases.

  • Weisberg, Herbert F. The Total Survey Error Approach: A Guide to the New Science of Survey Research. Chicago: University of Chicago Press, 2009.

    Weisberg presents an early overview of the total survey error perspective, the dominant paradigm in the field of survey methodology used to evaluate and improve surveys. This unified approach recognizes the many different sources of survey error, including sampling, coverage, measurement, nonresponse, and post-survey error.

  • Wolf, Christof, Dominique Joye, Tom W. Smith, and Yang-chih Fu, eds. The SAGE Handbook of Survey Methodology. Los Angeles: SAGE, 2017.

    This edited volume covers all aspects of survey research, with chapters on topics of special importance to online surveys, including nonprobability sampling, questionnaire design, mixed-mode surveys, and record linkage.
