Faking in Personnel Selection
by
Patrick D. Converse, Brian H. Kim, Yumiko Mochinushi, Yadi Yang
  • LAST REVIEWED: 20 January 2023
  • LAST MODIFIED: 25 September 2019
  • DOI: 10.1093/obo/9780199846740-0098

Introduction

Faking has been defined in several ways but generally refers to intentional distortion of responses on psychological measures. This behavior is a potential concern in personnel selection contexts because applicants who are motivated to obtain a job might consciously distort their responses on selection measures in an attempt to increase their chances of receiving a job offer, especially when they perceive their true qualifications to be inadequate. More specifically, job applicants may exaggerate or completely fabricate positive qualities and downplay or completely deny negative qualities. Although the reasoning underlying concerns about faking in personnel selection is fairly easy to understand, the antecedents, nature, and consequences of faking have turned out to be less straightforward. As a result, researchers and practitioners have devoted substantial time and attention to these issues in an attempt to understand faking and address it in practical selection contexts, whether by preventing the behavior or by measuring it and adjusting assessment processes to account for it. This article focuses on major examples of this work. Much of the empirical research, theoretical development, and practical intervention in this area focuses specifically on self-report personality measures, but the psychological processes involved often generalize to other selection measures, such as employment interviews or biographical data inventories. The focus on personality likely stems in part from the potential practical benefits of using these measures in selection (e.g., they are easy to administer and have demonstrated criterion-related validity) and from the perception that they are easily faked because, in contrast with cognitive tests, their items have no strictly correct answer. Thus, key examples of work on personality measures and other assessments are covered.

General Overviews

Relatively recent work on faking has been covered in two edited books. Griffith and Peterson 2006 contains fourteen chapters covering a range of topics, including the nature of faking, the history of faking research, faking antecedents, relevant research designs, measuring faking, preventing faking, faking consequences, culture and faking, and future research directions. Ziegler, et al. 2012 contains nineteen chapters and covers related topics, including faking prevalence, faking consequences, detecting and correcting for faking, preventing faking, faking in clinical and educational settings, and future directions. Both of these books provide useful general overviews of this area by covering a range of issues in a fairly accessible manner. Griffith and Robie 2013 reviews work on faking from 2006 to 2012.

  • Griffith, Richard L., and Mitchell H. Peterson, eds. A Closer Examination of Applicant Faking Behavior. Greenwich, CT: Information Age, 2006.

    Provides a general overview of work on faking. Covers questions including: What is faking? What is occurring when an applicant fakes? What are the characteristics of the “typical” faker? How do we assess whether an individual has faked? What methods can be used to reduce the impact of applicant faking?

  • Griffith, Richard L., and Chet Robie. “Personality Testing and the ‘F-Word’: Revisiting Seven Questions about Faking.” In Handbook of Personality at Work. Edited by Neil D. Christiansen and Robert P. Tett, 253–280. New York: Routledge, 2013.

    Reviews recent work on basic questions surrounding faking: Is faking an identifiable construct? Are people expected to fake? Can people fake? Do people fake? Do people differ in the ability and motivation to fake? Does faking matter? Can anything be done about faking?

  • Ziegler, Matthias, Carolyn MacCann, and Richard D. Roberts, eds. New Perspectives on Faking in Personality Assessment. New York: Oxford University Press, 2012.

    Provides an overview of faking research. Covers four general questions: Do people fake and does it matter? Can we tell if people fake? Can we stop people from faking? Is faking a consequential issue outside a job selection context?

Journals

Several journals publish research on faking. Journal of Applied Psychology, Personnel Psychology, Human Performance, and Journal of Business and Psychology publish articles across areas of industrial/organizational psychology, organizational behavior, and human resource management. This includes work on personnel selection in general and faking in particular. The International Journal of Selection and Assessment is a more focused journal that publishes articles on personnel selection, staffing, and assessment in organizations. Personality and Individual Differences publishes articles focusing on individual differences across a variety of research areas. Educational and Psychological Measurement addresses measurement issues, publishing theoretical articles focusing on new developments and applied articles focusing on new applications. The Journal of Personality and Social Psychology publishes articles across areas of personality and social psychology, including personality assessment.

Debates and Controversies

Theory and research on applicant faking have generated a number of debates and controversies, including over appropriate ways to define and measure the phenomenon, over the motivational factors involved and the prevalence of faking in typical selection settings, and over the impact of faking on outcomes of interest, given conflicting research findings. Morgeson, et al. 2007 summarizes a discussion among former journal editors about concerns related to faking on personality tests in selection; many of the points raised there had been proposed in earlier work and have been echoed since. Tett and Christiansen 2007 and Ones, et al. 2007 respond to that discussion, arguing against many of the points made in the article. Levin and Zickar 2002 explains how some differences in research findings stem from differences in how concepts are defined and measured. Griffith and Peterson 2008 sparked debate about the efficacy of social desirability measures for capturing faking constructs, while McGrath, et al. 2010 reviews attempts to use measures of socially desirable responding and other response biases to improve assessment validity.

  • Griffith, Richard L., and Mitchell H. Peterson. “The Failure of Social Desirability Measures to Capture Applicant Faking Behavior.” Industrial and Organizational Psychology 1.3 (2008): 308–311.

    DOI: 10.1111/j.1754-9434.2008.00053.x

    Summarizes a number of criticisms of the use of socially desirable responding scales to measure applicant faking.

  • Levin, Robert A., and Michael J. Zickar. “Investigating Self-Presentation, Lies, and Bullshit: Understanding Faking and Its Effects on Selection Decisions Using Theory, Field Research, and Simulation.” In The Psychology of Work: Theoretically Based Empirical Research. Edited by Jeanne M. Brett and Fritz Drasgow, 253–276. Mahwah, NJ: Lawrence Erlbaum, 2002.

    Summarizes differences in approaches to defining faking, to modeling and measuring faking with methods/procedures, and to predicting distortions in scores.

  • McGrath, Robert E., Matthew Mitchell, Brian H. Kim, and Leaetta Hough. “Evidence for Response Bias as a Source of Error Variance in Applied Assessment.” Psychological Bulletin 136.3 (2010): 450–470.

    DOI: 10.1037/a0019216

    Reviews attempts in past studies to use measures of faking (and other response biases) as either moderators or suppressors to enhance the accuracy of assessments. Discusses concerns with making such corrections.

  • Morgeson, Frederick P., Michael A. Campion, Robert L. Dipboye, John R. Hollenbeck, Kevin Murphy, and Neal Schmitt. “Reconsidering the Use of Personality Tests in Personnel Selection Contexts.” Personnel Psychology 60.3 (2007): 683–729.

    DOI: 10.1111/j.1744-6570.2007.00089.x

    Covers a discussion among experts regarding the state of the literature at that point in time. Some emphasis is placed on the relatively low criterion-related validities of personality tests in selection contexts, which implies a limit on the potential impact of faking on validity.

  • Ones, Deniz S., Stephan Dilchert, Chockalingam Viswesvaran, and Timothy A. Judge. “In Support of Personality Assessment in Organizational Settings.” Personnel Psychology 60 (2007): 995–1027.

    DOI: 10.1111/j.1744-6570.2007.00099.x

    Reviews previous meta-analyses indicating personality measures that demonstrate useful levels of criterion-related validity for job performance. Also suggests that response distortion does not completely undermine the psychometric properties of personality measures.

  • Tett, Robert P., and Neil D. Christiansen. “Personality Tests at the Crossroads: A Response to Morgeson, Campion, Dipboye, Hollenbeck, Murphy, and Schmitt.” Personnel Psychology 60.4 (2007): 967–993.

    DOI: 10.1111/j.1744-6570.2007.00098.x

    Discusses criterion-related validity evidence, arguing that personality tests can show useful levels of validity when examined appropriately. Also indicates that faking does influence validity but personality measures can still be useful in selection.

Definitions

A number of terms have been used in work on faking; social desirability, impression management, and response distortion, for example, are all common in this context. These terms also appear in other research areas, where they can carry somewhat different meanings. Within faking research, however, they typically have similar meanings, although precise definitions vary across authors. In general, these terms refer to intentional distortion of responses on selection measures. Paulhus 1984 introduces a distinction between socially desirable responding that is self-deceptive and socially desirable responding that is conscious. Paulhus and Trapnell 2008 provides an updated view on this distinction. Sackett 2011 discusses the definition of faking, proposing that observed personality item scores comprise six systematic variance components, a decomposition intended to clarify the conceptualization of faking.

  • Paulhus, Delroy L. “Two-Component Models of Socially Desirable Responding.” Journal of Personality and Social Psychology 46.3 (1984): 598–609.

    DOI: 10.1037/0022-3514.46.3.598

    Discusses the distinction between self-deception and impression management: self-deception occurs when respondents believe their own positive self-reports, whereas impression management involves conscious faking. The focus in selection is usually on impression management.

  • Paulhus, Delroy L., and Paul D. Trapnell. “Self-Presentation of Personality: An Agency-Communion Framework.” In Handbook of Personality: Theory and Research. 3d ed. Edited by Oliver P. John, Richard W. Robins, and Lawrence A. Pervin, 492–517. New York: Guilford, 2008.

    Proposes substantial revisions to theories of socially desirable responding that were established in Paulhus 1984. A distinction between the crafting of agentic versus communal images replaces the distinction between impression management and self-deception.

  • Sackett, Paul R. “Faking in Personality Assessments: Where Do We Stand?” In New Perspectives on Faking in Personality Assessment. Edited by Matthias Ziegler, Carolyn MacCann, and Richard D. Roberts, 330–344. New York: Oxford University Press, 2011.

    DOI: 10.1093/acprof:oso/9780195387476.003.0091

    Includes a useful discussion of faking definitions. Suggests faking involves situation-specific intentional distortion of responses.

Consequences of Faking

Research has also examined outcomes associated with faking in selection contexts for measures, individuals, and organizations. For example, effects on construct validity, hiring decisions, and criterion-related validity have been examined. Some disagreement exists regarding the nature and severity of these consequences, with some arguing faking can have notable negative effects and others suggesting this may not be the case. It seems likely that faking can have detrimental effects, but that these are conditional on a number of factors. Thus, additional research examining the conditions under which faking is more versus less problematic may be useful in clarifying this issue.

Works Consistent with Notable Faking Effects

Several studies suggest that faking can produce negative consequences for measures, individuals, and organizations. This work has examined several implications, including for construct validity, criterion-related validity, and selection decisions. Schmit and Ryan 1993; Stark, et al. 2001; and Holden 2007 focus on construct validity. Komar, et al. 2008; Lee, et al. 2017; and Salgado 2016 address criterion-related validity. Dudley, et al. 2005; Rosse, et al. 1998; and Mueller-Hanson, et al. 2003 examine hiring decisions.
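
To make the simulation logic in this literature concrete, here is a minimal Monte Carlo sketch in the spirit of (but not reproducing) Komar, et al. 2008; the sample size, faking prevalence, and inflation magnitudes are illustrative assumptions:

```python
# Minimal sketch: score inflation among a subset of applicants attenuates
# criterion-related validity and changes who is hired. Parameters are
# illustrative, not taken from the cited study.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# True trait scores and job performance, correlated at rho = .25
rho = 0.25
true_score = rng.standard_normal(n)
performance = rho * true_score + np.sqrt(1 - rho**2) * rng.standard_normal(n)

# Assume 30% of applicants fake, inflating observed scores by ~1 SD on average
fakers = rng.random(n) < 0.30
observed = true_score + fakers * rng.normal(1.0, 0.3, n)

print(f"validity, honest scores: {np.corrcoef(true_score, performance)[0, 1]:.3f}")
print(f"validity, faked scores:  {np.corrcoef(observed, performance)[0, 1]:.3f}")

# Top-down selection of the top 10%: mean performance of those hired
k = n // 10
hired_honest = np.argsort(true_score)[-k:]
hired_faked = np.argsort(observed)[-k:]
print(f"mean performance of hires (honest): {performance[hired_honest].mean():.3f}")
print(f"mean performance of hires (faked):  {performance[hired_faked].mean():.3f}")
```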

  • Dudley, Nicole M., Lynn A. McFarland, Scott A. Goodman, Steven T. Hunt, and Eric J. Sydell. “Racial Differences in Socially Desirable Responding in Selection Contexts: Magnitude and Consequences.” Journal of Personality Assessment 85.1 (2005): 50–64.

    DOI: 10.1207/s15327752jpa8501_05

    Examines differences in scores on socially desirable responding scales across racial groups. The authors also explore how such differences might affect hiring rates and adverse impact when those scales are used to detect fakers or to correct faked scores.

  • Holden, Ronald R. “Socially Desirable Responding Does Moderate Personality Scale Validity Both in Experimental and in Non-experimental Contexts.” Canadian Journal of Behavioural Science 39 (2007): 184–201.

    DOI: 10.1037/cjbs2007015

    Discusses the extent to which experimentally induced faking and natural faking affect the validity of self-report personality scales. The author argues that concerns regarding faking should not be dismissed.

  • Komar, Shawn, Douglas J. Brown, Jennifer A. Komar, and Chet Robie. “Faking and the Validity of Conscientiousness: A Monte Carlo Investigation.” Journal of Applied Psychology 93.1 (2008): 140–154.

    DOI: 10.1037/0021-9010.93.1.140

    This research uses simulated data to examine the potential effects of faking on the criterion-related validity of a personality measure. Findings indicate that criterion-related validity can be negatively affected although the effects may not be large in some circumstances.

  • Lee, HyeSun, Weldon Z. Smith, and Kurt F. Geisinger. “Faking under a Nonlinear Relationship between Personality Assessment Scores and Job Performance.” International Journal of Selection and Assessment 25.3 (2017): 284–298.

    DOI: 10.1111/ijsa.12180

    Examines the impact of faking when the relationship between personality scores and job performance is non-linear given that research indicates this relationship may not be linear in some situations. The results suggest faking on personality assessments led to a substantial decrease in the prediction of job performance.

  • Mueller-Hanson, Rose, Eric D. Heggestad, and George C. Thornton III. “Faking and Selection: Considering the Use of Personality from Select-In and Select-Out Perspectives.” Journal of Applied Psychology 88.2 (2003): 348–355.

    DOI: 10.1037/0021-9010.88.2.348

    Provides evidence that faking may affect criterion-related validity and the quality of selection decisions. The authors suggest that implementing personality measures in a select-out fashion (removing applicants who do not meet a minimum score) may be useful.

  • Rosse, Joseph G., Mary D. Stecher, Janice L. Miller, and Robert A. Levin. “The Impact of Response Distortion on Preemployment Personality Testing and Hiring Decisions.” Journal of Applied Psychology 83.4 (1998): 634–644.

    DOI: 10.1037/0021-9010.83.4.634

    This study compares faking among job applicants and job incumbents. Results indicate greater faking among applicants, notable variance in the amount of faking, and potentially significant effects on who is hired when the selection ratio is less than 0.5.

  • Salgado, Jesús F. “A Theoretical Model of Psychometric Effects of Faking on Assessment Procedures: Empirical Findings and Implications for Personality at Work.” International Journal of Selection and Assessment 24.3 (2016): 209–228.

    DOI: 10.1111/ijsa.12142

    Investigates the psychometric effects of faking during the assessment process. The findings suggest faking increases means (in the case of faking good) but decreases standard deviations (causing range restriction), reliability, and validity.

  • Schmit, Mark J., and Ann Marie Ryan. “The Big Five in Personnel Selection: Factor Structure in Applicant and Nonapplicant Populations.” Journal of Applied Psychology 78.6 (1993): 966–974.

    DOI: 10.1037/0021-9010.78.6.966

    This study examines whether similar factor structures hold for personality assessments in applicant and nonapplicant samples. Results indicate the five-factor structure fit data from a student sample but not from an applicant sample, implying that faking may influence construct validity.

  • Stark, Stephen, Oleksandr S. Chernyshenko, Kim-Yin Chan, Wayne C. Lee, and Fritz Drasgow. “Effects of the Testing Situation on Item Responding: Cause for Concern.” Journal of Applied Psychology 86.5 (2001): 943–953.

    DOI: 10.1037/0021-9010.86.5.943

    Addresses the implications of faking for construct validity using item response theory methods. Results support the notion that faking can have negative effects on construct validity.

Works Inconsistent with Notable Faking Effects

Some findings have suggested that faking may not have significant negative consequences in selection settings. These studies typically operationalize faking using social desirability measures. Ones, et al. 1996 involves a meta-analysis of research on social desirability. Barrick and Mount 1996 examines implications for criterion-related validity. Ellingson, et al. 2001 addresses construct validity.

  • Barrick, Murray R., and Michael K. Mount. “Effects of Impression Management and Self-Deception on the Predictive Validity of Personality Constructs.” Journal of Applied Psychology 81.3 (1996): 261–272.

    DOI: 10.1037/0021-9010.81.3.261

    Provides evidence that self-deception and impression management do not attenuate the predictive validities of conscientiousness or emotional stability.

  • Ellingson, Jill E., D. Brent Smith, and Paul R. Sackett. “Investigating the Influence of Social Desirability on Personality Factor Structure.” Journal of Applied Psychology 86.1 (2001): 122–133.

    DOI: 10.1037/0021-9010.86.1.122

    Using multiple data sets, the authors found that socially desirable responding does not have a noticeable impact on the factor structure of personality tests.

  • Ones, Deniz S., Chockalingam Viswesvaran, and Angelika D. Reiss. “The Role of Social Desirability in Personality Testing for Personnel Selection: The Red Herring.” Journal of Applied Psychology 81.6 (1996): 660–679.

    DOI: 10.1037/0021-9010.81.6.660

    These authors argue that social desirability is less prevalent than many previously believed and, thus, a minor problem. They also advance the notion that impression management represents a meaningful construct relevant to performing well in many jobs, which organizations might want to include in selection criteria rather than eliminate.

Faking Ability

Faking ability can be viewed from two perspectives. First, a basic question is the extent to which applicants in general are capable of successfully faking selection measures. Viswesvaran and Ones 1999; Martin, et al. 2002; Hough, et al. 1990; Doll 1971; and Klehe, et al. 2012 provide evidence relevant to this issue. Second, studies have demonstrated individual differences in how much people can and do fake, with some noting that certain individuals actually perform worse on assessments rather than enhancing their scores beyond honest levels. However, only a few studies have focused on the potential determinants of an “ability” to fake that would allow more successful response distortion. Snell, et al. 1999 addresses this issue.

  • Doll, Richard E. “Item Susceptibility to Attempted Faking as Related to Item Characteristic and Adopted Fake Set.” Journal of Psychology 77.1 (1971): 9–16.

    DOI: 10.1080/00223980.1971.9916848

    One of the first studies to examine faking performed in a more realistic, “subtle” manner, in which people received experimental instructions to avoid detection by a lie scale or to provide answers that could be defended later in interviews, in addition to “look as good as possible.”

  • Hough, Leaetta M., Newell K. Eaton, Marvin D. Dunnette, John D. Kamp, and Rodney A. McCloy. “Criterion-Related Validities of Personality Constructs and the Effect of Response Distortion on Those Validities.” Journal of Applied Psychology 75.5 (1990): 581–595.

    DOI: 10.1037/0021-9010.75.5.581

    Provides estimates of the amount of faking performed by a military sample.

  • Klehe, Ute-Christine, Martin Kleinmann, Thomas Hartstein, et al. “Responding to Personality Tests in a Selection Context: The Role of Ability to Identify Criteria and the Ideal-Employee Factor.” Human Performance 25.4 (2012): 273–302.

    DOI: 10.1080/08959285.2012.703733

    An empirical examination of an ability to identify criteria (ATIC) that allows people to select responses reflecting an ideal employee.

  • Martin, Beth A., Chieh-Chen Bowen, and Steven T. Hunt. “How Effective Are People at Faking Personality Questionnaires?” Personality and Individual Differences 32.2 (2002): 247–256.

    DOI: 10.1016/S0191-8869(01)00021-6

    Provides a demonstration that people could fake an “ideal” personality profile, even for a job never held before.

  • Snell, Andrea F., Eric J. Sydell, and Sarah B. Lueke. “Towards a Theory of Applicant Faking: Integrating Studies of Deception.” Human Resource Management Review 9.2 (1999): 219–242.

    DOI: 10.1016/S1053-4822(99)00019-4

    In their theoretical model, these authors highlight an “ability to fake” as a major factor determining the chance of faking successfully, in addition to one’s motivation. They also propose that this ability is determined by three factors: disposition, past experience, and characteristics of the test being faked.

  • Viswesvaran, Chockalingam, and Deniz S. Ones. “Meta-analyses of Fakability Estimates: Implications for Personality Measurement.” Educational and Psychological Measurement 59.2 (1999): 197–210.

    DOI: 10.1177/00131649921969802

    In a large-scale review of the literature on faking personality tests, the authors found that individuals can fake when instructed to do so. However, the type of design and faking instructions used played a large role in the amount of faking that occurred.

Faking Prevalence

Faking prevalence refers to the extent to which applicants actually do fake during the selection process. Studies on this issue typically attempt to identify the percentage of individuals in a sample who have likely faked their responses. Griffith and Converse 2012 provides a review of this research. Donovan, et al. 2014 and Arthur, et al. 2010 provide evidence suggesting that faking is relatively common.
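
As an illustration of the within-subjects logic behind many prevalence estimates, the sketch below flags respondents whose applicant-condition score exceeds their honest-condition score by more than measurement error alone would predict; the function name, reliability value, cutoff, and simulated data are all illustrative assumptions rather than a procedure prescribed by the cited works:

```python
# Sketch of a within-subjects prevalence estimate; data are simulated.
import numpy as np

def faking_prevalence(applicant, honest, reliability, sd, z_crit=1.96):
    """Proportion of respondents flagged as likely fakers.

    applicant, honest -- scale scores for the same people in each condition
    reliability       -- internal-consistency estimate for the scale
    sd                -- scale standard deviation (e.g., from the honest sample)
    """
    sem = sd * np.sqrt(1 - reliability)   # standard error of measurement
    se_diff = np.sqrt(2) * sem            # SE of the difference of two scores
    return ((applicant - honest) > z_crit * se_diff).mean()

# Illustrative data: 200 respondents, ~30% inflating their applicant scores
rng = np.random.default_rng(0)
honest = rng.normal(3.5, 0.5, 200)
inflation = np.where(rng.random(200) < 0.30, rng.normal(0.8, 0.2, 200), 0.0)
applicant = honest + inflation + rng.normal(0, 0.1, 200)  # plus retest noise
print(f"estimated prevalence: {faking_prevalence(applicant, honest, 0.85, 0.5):.0%}")
```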

  • Arthur, Winfred, Jr., Ryan M. Glaze, Anton J. Villado, and Jason E. Taylor. “The Magnitude and Extent of Cheating and Response Distortion Effects on Unproctored Internet‐Based Tests of Cognitive Ability and Personality.” International Journal of Selection and Assessment 18.1 (2010): 1–16.

    DOI: 10.1111/j.1468-2389.2010.00476.x

    Examines whether signs of faking and cheating were detectable on unproctored Internet-based tests, as compared to similar tests in proctored environments. Results show few differences, implying that unproctored testing is not associated with greater faking.

  • Donovan, John J., Stephen A. Dwight, and Dan Schneider. “The Impact of Applicant Faking on Selection Measures, Hiring Decisions, and Employee Performance.” Journal of Business and Psychology 29.3 (2014): 479–493.

    DOI: 10.1007/s10869-013-9318-5

    A study using a within-subjects design to identify faking by actual job applicants; the results suggest that a considerable proportion of applicants fake.

  • Griffith, Richard L., and Patrick D. Converse. “The Rules of Evidence and the Prevalence of Applicant Faking.” In New Perspectives on Faking in Personality Assessment. Edited by Matthias Ziegler, Carolyn MacCann, and Richard D. Roberts, 34–52. New York: Oxford University Press, 2012.

    This chapter provides a discussion of the faking prevalence question and a summary of evidence. Suggests that roughly 30 percent of applicants (±10 percent) engage in faking behavior.

Models of Faking

Given some evidence that applicants can and do fake in selection settings, several models have been developed in an attempt to explain faking behavior. Snell, et al. 1999; McFarland and Ryan 2006; and Holden, et al. 1992 represent important examples of models in this area. Marcus 2009, Kim 2011, and Goffin and Boyd 2009 integrate previous faking models. Johnson and Hogan 2006 presents a somewhat different view on faking. Ellingson and McFarland 2011 conceptualizes faking from a motivation perspective. Roulin, et al. 2016 proposes a dynamic model. Finally, Griffith and Peterson 2011, a special issue of the journal Human Performance, deals with theoretical perspectives on faking.

  • Ellingson, Jill E., and Lynn A. McFarland. “Understanding Faking Behavior through the Lens of Motivation: An Application of VIE Theory.” Human Performance 24.4 (2011): 322–337.

    DOI: 10.1080/08959285.2011.597477

    Proposes a parsimonious conceptual framework to explain applicant faking behaviors through a theory of motivation.

  • Goffin, Richard D., and Allison C. Boyd. “Faking and Personality Assessment in Personnel Selection: Advancing Models of Faking.” Canadian Psychology 50.3 (2009): 151–160.

    DOI: 10.1037/a0015946

    Proposes a cognitive decision-tree based on an integration of past models to explain the process of how one becomes motivated to fake. This model emphasizes efficacy beliefs more than prior models of faking.

  • Griffith, Richard L., and Mitchell H. Peterson, eds. Special Issue: Uncovering the Nature of Applicant Faking Behavior: A Presentation of Theoretical Perspectives. Human Performance 24.4 (2011).

    This special issue of the journal Human Performance presents several theoretical perspectives on faking. In the introduction to the issue, the authors discuss theory in this research area and briefly introduce the articles.

  • Holden, Ronald R., Daryl G. Kroner, G. Cynthia Fekken, and Suzanne M. Popham. “A Model of Personality Test Item Response Dissimulation.” Journal of Personality and Social Psychology 63.2 (1992): 272–279.

    DOI: 10.1037/0022-3514.63.2.272

    Proposes a model conceptualizing the construction of a faked personality test response as resulting from a cognitive decision-making process that compares each test item with schemas about oneself and about the relevant situation. If true, this implies that honest responders generally need to make fewer judgments than people who dissimulate, thus responding more quickly.

  • Johnson, John A., and Robert Hogan. “A Socioanalytic View of Faking.” In A Closer Examination of Applicant Faking Behavior. Edited by Richard L. Griffith and Mitchell H. Peterson, 209–231. Greenwich, CT: Information Age, 2006.

    Presents a different perspective on the issue of faking involving the notion that responding to personality items is a form of social interaction rather than objective communication.

  • Kim, Brian H. “Deception and Applicant Faking: Putting the Pieces Together.” International Review of Industrial and Organizational Psychology 26 (2011): 181–217.

    Integrates past models of faking with theories of deception in general communications, lie detection, and workplace deviance (counterproductive work behaviors), and specifies cognitive-behavioral strategies to explain how responses might be distorted (from honest ones) or fabricated.

  • Marcus, Bernd. “‘Faking’ from the Applicant’s Perspective: A Theory of Self-Presentation in Personnel Selection Settings.” International Journal of Selection and Assessment 17.4 (2009): 417–430.

    DOI: 10.1111/j.1468-2389.2009.00483.x

    Offers a process model of self-presentation that combines the influence of dispositional and situational factors during recruitment and selection. The model attempts to show how self-presentation motivation and skills influence test scores and, later, the validity of hiring decisions.

  • McFarland, Lynn A., and Anne Marie Ryan. “Toward an Integrated Model of Applicant Faking Behavior.” Journal of Applied Social Psychology 36.4 (2006): 979–1016.

    DOI: 10.1111/j.0021-9029.2006.00052.x

    Updated from their work in 2000, these authors propose one of the first models of the process of faking.

  • Roulin, Nicolas, Franciska Krings, and Steve Binggeli. “A Dynamic Model of Applicant Faking.” Organizational Psychology Review 6.2 (2016): 145–170.

    DOI: 10.1177/2041386615580875

    Proposes a dynamic model of faking based on signaling theory. Argues that faking is driven by applicants’ and organizations’ adaptations in a competitive environment.

  • Snell, Andrea F., Eric J. Sydell, and Sarah B. Lueke. “Towards a Theory of Applicant Faking: Integrating Studies of Deception.” Human Resource Management Review 9.2 (1999): 219–242.

    DOI: 10.1016/S1053-4822(99)00019-4

    One of the earliest models of faking, this theory focuses on motivational and ability antecedents.

Coaching

Although this literature has been plagued with vague and overly broad definitions of test coaching, various studies have shown that advising or instructing respondents to fake in specific ways can introduce response distortion in assessments that may not occur to people naturally. Forms of coaching that relate to faking have varied considerably, ranging from small hints about the way an assessment or faking detection scale is designed to outright cheating with a list of test questions and correct/best responses. Cullen, et al. 2006 focuses on situational judgment tests. Berry, et al. 2007 focuses on integrity tests. Miller and Barrett 2008 focuses on personality tests. Maurer and Solamon 2006 focuses on interviews, and Whyte 2002 (originally published in 1956) represents an early source relevant to this issue.

  • Berry, Christopher M., Paul R. Sackett, and Shelly Wiemann. “A Review of Recent Developments in Integrity Test Research.” Personnel Psychology 60.2 (2007): 271–301.

    DOI: 10.1111/j.1744-6570.2007.00074.x

    Provides an overview of the research on integrity testing, including some major studies of how faking and coaching affect test scores.

  • Cullen, Michael J., Paul R. Sackett, and Filip Lievens. “Threats to the Operational Use of Situational Judgment Tests in the College Admission Process.” International Journal of Selection and Assessment 14.2 (2006): 142–155.

    DOI: 10.1111/j.1468-2389.2006.00340.x

    Shows that some simple coaching strategies can exploit a weakness in the scoring system of some situational judgment tests to increase test takers’ scores.

  • Maurer, Todd J., and Jerry M. Solamon. “The Science and Practice of a Structured Employment Interview Coaching Program.” Personnel Psychology 59.2 (2006): 433–456.

    DOI: 10.1111/j.1744-6570.2006.00797.x

    Although not focused on matters of faking, the authors discuss the development of a test coaching program that increased the transparency of an employment interview process and offered preparation strategies. Potential decreases to the psychometric properties of the interview are also considered.

  • Miller, Corey E., and Gerald V. Barrett. “The Coachability and Fakability of Personality-Based Selection Tests Used for Police Selection.” Public Personnel Management 37.3 (2008): 339–351.

    DOI: 10.1177/009102600803700306

    Examines whether people can be coached to alter personality test scores successfully, but also reviews concerns with certain types of test coaching that have been used in practice.

  • Whyte, William H., Jr. The Organization Man. Philadelphia: University of Pennsylvania Press, 2002.

    Originally published in 1956, a historical source known for providing specific advice about how to alter true responses to personality test items. Interestingly, the author emphasizes somewhat sophisticated strategies, such as aiming to produce only moderately desirable scores that would allow one to “pass” onto the next stage of selection, instead of using faking to excel on the test.

Faking Across Cultures

Recent research has focused on faking across cultures, examining the extent to which national culture might affect faking behaviors. Fell and König 2016 is the first study using large-scale cross-cultural data to demonstrate relationships between culture and faking in hiring situations. Fell and König 2018 is another cross-cultural study providing evidence of relationships between culture and students’ academic faking.

Preventing Faking

One major approach to addressing faking in selection contexts is to attempt to prevent or reduce it. Some of the techniques in this category target the motivation or intention to fake, whereas others target the ability to fake. Most of the proposed approaches have been tested in only a few studies, the evidence on many of them is mixed, and each raises different practical issues, suggesting that further research is necessary.

Warnings

Warnings are statements included in measure instructions that warn against faking. Often these statements include detection information (indicating that faking can be identified), consequence information (indicating the consequences of faking), or both. Dwight and Donovan 2003 provides evidence on the core issue of effectiveness in reducing faking. McFarland 2003; Vasilopoulos, et al. 2005; and Robson, et al. 2008 address other implications of warnings, including applicant reactions, cognitive processes, and convergent validity. Fan, et al. 2012 and Pace and Borman 2006 discuss variations on the traditional approaches to warnings. Burns, et al. 2015 examines the effects of different types of warnings during online assessments.

  • Burns, Gary N., Jenna N. Fillipowski, Megan B. Morris, and Elizabeth A. Shoda. “Impact of Electronic Warnings on Online Personality Scores and Test-Taker Reactions in an Applicant Simulation.” Computers in Human Behavior 48 (2015): 163–172.

    DOI: 10.1016/j.chb.2015.01.051

    Examines the effects of different types of electronic warnings on personality scores and test takers’ reactions during an online assessment. Results suggest that negatively worded warnings and accusations were more effective in preventing faking.

  • Dwight, Stephen A., and John J. Donovan. “Do Warnings Not to Fake Reduce Faking?” Human Performance 16.1 (2003): 1–23.

    DOI: 10.1207/S15327043HUP1601_1

    From a review of the sparse literature to that point in time, the authors reveal that warnings given to job applicants generally reduce faking by only a small amount. A follow-up study suggests that the most effective types of warnings mention both a method that will be used to detect faking and a corresponding punishment.

  • Fan, Jinyan, Dingguo Gao, Sarah A. Carroll, Felix J. Lopez, T. Siva Tian, and Hui Meng. “Testing the Efficacy of a New Procedure for Reducing Faking on Personality Tests within Selection Contexts.” Journal of Applied Psychology 97.4 (2012): 866–880.

    DOI: 10.1037/a0026655

    Explores a “test-warning-retest” method for deterring faking by giving warnings early in a test when bogus statements or impression management scales detect patterns representing distorted responses. Discusses issues relevant to implementing this method in practice.

  • McFarland, Lynn A. “Warning against Faking on a Personality Test: Effects on Applicant Reactions and Personality Test Scores.” International Journal of Selection and Assessment 11.4 (2003): 265–276.

    DOI: 10.1111/j.0965-075X.2003.00250.x

    Examines how warnings may influence general reactions to the testing process (perceived validity) and judgments about the organization in general, beyond one’s motivation to fake test responses.

  • Pace, Victoria L., and Walter C. Borman. “The Use of Warnings to Discourage Faking on Noncognitive Inventories.” In A Closer Examination of Applicant Faking Behavior. Edited by Richard L. Griffith and Mitchell H. Peterson, 283–304. Greenwich, CT: Information Age, 2006.

    This chapter reviews research on warnings and outlines several warning types, including reasoning (indicating that honest responses will lead to better job fit), educational (explaining the reasons for testing), and moral (increasing the salience of honesty as a moral standard).

  • Robson, Sean M., Andrew Jones, and Joseph Abraham. “Personality, Faking, and Convergent Validity: A Warning Concerning Warning Statements.” Human Performance 21.1 (2008): 89–106.

    DOI: 10.1080/08959280701522155

    Examines the effect of warning statements on convergent validity in addition to scale scores.

  • Vasilopoulos, Nicholas L., Jeffrey M. Cucina, and Julia M. McElreath. “Do Warnings of Response Verification Moderate the Relationship between Personality and Cognitive Ability?” Journal of Applied Psychology 90.2 (2005): 306–322.

    DOI: 10.1037/0021-9010.90.2.306

    Provides findings about how warnings alter the cognitive processes involved in faking responses, including the time needed to respond (response time latencies).

Forced-Choice Measures

Forced-choice measures are designed to make faking more difficult by requiring applicants to make choices among two or more statements that are balanced in terms of social desirability. For example, a forced-choice item might consist of two statements—(a) make plans and stick to them and (b) catch on to things quickly—and respondents are asked to select the one statement that is most like them. If these statements are balanced appropriately, faking should be reduced because applicants will have more difficulty responding on the basis of statement desirability. Converse, et al. 2006 provides an overview of work on forced-choice measures. A potential concern with forced-choice measures is that they can have limitations in terms of psychometric properties. Hicks 1970 and Meade 2004 discuss these issues. Christiansen, et al. 2005 and Heggestad, et al. 2006 present evidence related to reducing faking and retaining trait-level information under faking conditions, with the former article’s results leading to more positive conclusions. Stark, et al. 2012 and Brown and Maydeu-Olivares 2013 focus on using item response theory (IRT) in the context of forced-choice measures. Pavlov, et al. 2019 provides evidence suggesting the forced-choice format performs similarly to the Likert scale format.
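
To illustrate why traditionally scored forced-choice measures yield ipsative data (the limitation discussed in Hicks 1970 and Meade 2004), consider the following minimal sketch; the trait labels and the mapping of statements to traits are illustrative assumptions:

```python
# Minimal sketch of "most like me" forced-choice scoring. Each item pairs two
# statements matched (roughly) on social desirability; the trait assignments
# are invented for illustration.
from collections import Counter

items = [
    [("conscientiousness", "I make plans and stick to them"),
     ("openness", "I catch on to things quickly")],
    [("agreeableness", "I make people feel at ease"),
     ("extraversion", "I am the life of the party")],
]

def score(choices):
    """choices[i] is the index (0 or 1) of the statement picked on item i."""
    totals = Counter()
    for item, pick in zip(items, choices):
        trait, _statement = item[pick]
        totals[trait] += 1      # exactly one trait gains a point per item...
    return totals               # ...so totals always sum to len(items)

# Every respondent's trait totals sum to the same constant (here, 2), so the
# scores carry only intra-individual (ipsative) information.
print(score([0, 1]))
```

Because every profile sums to the same constant, inter-individual comparisons are compromised, which is the problem the IRT approaches in Stark, et al. 2012 and Brown and Maydeu-Olivares 2013 are designed to address.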

  • Brown, Anna, and Alberto Maydeu-Olivares. “How IRT Can Solve Problems of Ipsative Data in Forced-Choice Questionnaires.” Psychological Methods 18.1 (2013): 36–52.

    DOI: 10.1037/a0030641

    Discusses using IRT models in the context of forced-choice measures. Notes that traditional scoring of these measures results in ipsative data with various limitations. Argues that using IRT modeling instead can produce scores that do not have these limitations.

  • Christiansen, Neil D., Gary N. Burns, and George E. Montgomery. “Reconsidering Forced-Choice Item Formats for Applicant Personality Assessment.” Human Performance 18.3 (2005): 267–307.

    DOI: 10.1207/s15327043hup1803_4

    Provides a useful discussion of forced-choice measures, including responses to common criticisms of these instruments. Presents results suggesting that forced-choice measures reduce distortion and maintain validity better than normative measures under faking conditions.

  • Converse, Patrick D., Frederick L. Oswald, Anna Imus, Cynthia Hedricks, Radha Roy, and Hilary Butera. “Forcing Choices in Personality Measurement: Benefits and Limitations.” In A Closer Examination of Applicant Faking Behavior. Edited by Richard L. Griffith and Mitchell H. Peterson, 263–282. Greenwich, CT: Information Age, 2006.

    Provides a general overview of research on forced-choice measures, including potential advantages and disadvantages. Focuses on five areas: item format, psychometric properties, applicant faking, criterion-related validity, and applicant reactions.

  • Heggestad, Eric D., Morgan Morrison, Charlie L. Reeve, and Rodney A. McCloy. “Forced-Choice Assessments of Personality for Selection: Evaluating Issues of Normative Assessment and Faking Resistance.” Journal of Applied Psychology 91.1 (2006): 9–24.

    DOI: 10.1037/0021-9010.91.1.9

    Compares the forced-choice format and the Likert-type format in terms of faking resistance. Findings indicate that, although the forced-choice format is more resistant to faking at the group level of analysis (in terms of mean differences), this format does not provide better assessments of individual trait standing than the Likert format under faking conditions.

  • Hicks, Lou E. “Some Properties of Ipsative, Normative, and Forced-Choice Normative Measures.” Psychological Bulletin 74.3 (1970): 167–184.

    DOI: 10.1037/h0029780

    Discusses psychometric issues that can be relevant to forced-choice measures, focusing on the concept of ipsative measurement. Contrasts absolute measurement that yields inter-individual score differences with ipsative measurement that yields intra-individual score differences. Highlights limitations associated with purely ipsative measures.

  • Meade, Adam W. “Psychometric Problems and Issues Involved with Creating and Using Ipsative Measures for Selection.” Journal of Occupational and Organizational Psychology 77.4 (2004): 531–552.

    DOI: 10.1348/0963179042596504

    Addresses issues with ipsative measures in selection contexts. Highlights interdependencies in forced-choice ipsative data. Suggests that ipsative measures have important limitations for employee selection purposes.

  • Pavlov, Goran, Alberto Maydeu-Olivares, and Amanda J. Fairchild. “Effects of Applicant Faking on Forced-Choice and Likert Scores.” Organizational Research Methods 22.3 (2019): 710–739.

    DOI: 10.1177/1094428117753683

    Examines whether forced-choice measures can mitigate the effects of applicant faking relative to Likert scale methods. The results suggest that the forced-choice format performs similarly to the Likert scale format.

  • Stark, Stephen, Oleksandr S. Chernyshenko, and Fritz Drasgow. “Constructing Fake-Resistant Personality Tests Using Item Response Theory: High Stakes Personality Testing with Multidimensional Pairwise Preferences.” In New Perspectives on Faking in Personality Assessment. Edited by Matthias Ziegler, Carolyn MacCann, and Richard D. Roberts, 214–239. New York: Oxford University Press, 2012.

    Discusses IRT models related to forced-choice measures. Reviews an IRT method for developing forced-choice measures that are intended to be fake resistant. Discusses evidence for this method from simulations and empirical findings.

Reducing Transparency

Reducing transparency is an approach that attempts to decrease faking by making it more difficult to identify the nature of the items or purpose of the measure. The notion is that if applicants can easily identify the target characteristics and scoring, they can more easily fake their responses. Thus, reducing transparency should also reduce faking. Arthur, et al. 2014; McFarland, et al. 2002; Cucina, et al. 2019; Kluger, et al. 1991; and Kolk, et al. 2003 examine this approach for situational judgment tests, personality measures, biodata tests, and assessment centers.
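
As a small illustration of how scoring transparency differs across keying methods (the contrast examined in Kluger, et al. 1991), the sketch below compares an item-keyed response scale with an option-keyed one; the response options and empirical weights are invented:

```python
# Sketch contrasting item keying with option keying for a single biodata item.
# Under item keying, "more" of the keyed direction always scores higher, so a
# motivated respondent can guess the best answer; under option keying, each
# option carries an empirically derived weight (invented here), so the
# highest-scoring option is not obvious to the respondent.
item_keyed = {"never": 1, "sometimes": 2, "often": 3, "very often": 4}
option_keyed = {"never": 0, "sometimes": 2, "often": 3, "very often": 1}

response = "very often"
print("item-keyed score:  ", item_keyed[response])    # transparent maximum
print("option-keyed score:", option_keyed[response])  # middle options score best
```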

  • Arthur, Winfred, Jr., Ryan M. Glaze, Steven M. Jarrett, Craig D. White, Ira Schurig, and Jason E. Taylor. “Comparative Evaluation of Three Situational Judgment Test Response Formats in Terms of Construct-Related Validity, Subgroup Differences, and Susceptibility to Response Distortion.” Journal of Applied Psychology 99.3 (2014): 535–545.

    DOI: 10.1037/a0035788

    Compares the amount of response distortion observed for different response formats to a situational judgment test, where the formats differed in transparency.

  • Cucina, Jeffrey M., Nicholas L. Vasilopoulos, Chihwei Su, Henry H. Busciglio, Irina Cozma, Arwen H. DeCostanza, Nicholas R. Martin, and Megan N. Shaw. “The Effects of Empirical Keying of Personality Measures on Faking and Criterion-Related Validity.” Journal of Business and Psychology 34.3 (2019): 337–356.

    DOI: 10.1007/s10869-018-9544-y

    Investigates the effects of empirical keying on faking and criterion-related validity. Results suggest empirical keying increases criterion-related validity for academic, training, and job performance. It also reduces the effects of faking.

  • Kluger, Avraham N., Richard R. Reilly, and Craig J. Russell. “Faking Biodata Tests: Are Option-Keyed Instruments More Resistant?” Journal of Applied Psychology 76.6 (1991): 889–896.

    DOI: 10.1037/0021-9010.76.6.889

    Examines how reducing the transparency of a test’s scoring systems (i.e., option-keying versus item-keying) might reduce faking or alter a test’s validity.

  • Kolk, Nanja J., Marise Ph. Born, and Henk van der Flier. “The Transparent Assessment Centre: The Effects of Revealing Dimensions to Candidates.” Applied Psychology: An International Review 52.4 (2003): 648–668.

    DOI: 10.1111/1464-0597.00156

    Tests the effect of revealing (or concealing) the dimensions measured by an assessment center on scores achieved by participants.

  • McFarland, Lynn A., Ann Marie Ryan, and Aleksander Ellis. “Item Placement on a Personality Measure: Effects on Faking Behavior and Test Measurement Properties.” Journal of Personality Assessment 78.2 (2002): 348–369.

    DOI: 10.1207/S15327752JPA7802_09

    Examines whether or not presenting items randomly instead of grouped together by scale would mask the purpose of an assessment and reduce faking.

Time Constraints

Introducing time constraints involves limiting the amount of time test-takers have to respond. The logic behind this technique is that faking may take additional time and cognitive resources and therefore time limits may reduce the ability of applicants to fake effectively. This approach has received limited attention. Holden, et al. 2001 and Komar, et al. 2010 present mixed findings for this technique.

  • Holden, Ronald R., Lisa L. Wood, and Leah Tomashewski. “Do Response Time Limitations Counteract the Effect of Faking on Personality Inventory Validity?” Journal of Personality and Social Psychology 81.1 (2001): 160–169.

    DOI: 10.1037/0022-3514.81.1.160

    Explores the possibility that restricting response time might prevent faking, but fails to find supporting evidence.

  • Komar, Shawn, Jennifer A. Komar, Chet Robie, and Simon Taggar. “Speeding Personality Measures to Reduce Faking: A Self-Regulatory Model.” Journal of Personnel Psychology 9.3 (2010): 126–137.

    DOI: 10.1027/1866-5888/a000016

    Shows that restricting the amount of time to provide a response on a personality test reduced socially desirable responding, but primarily for individuals with low cognitive ability.

Response Verification and Elaboration

This approach to reducing faking involves focusing on certain types of items or testing formats that solicit responses that could be verified. The elaboration method, for instance, requires respondents to provide details supporting a multiple-choice response. These methods are believed to discourage respondents from misrepresenting themselves by making the task of faking more difficult, increasing the likelihood of detection, or both. Becker and Colquitt 1992 focuses on verification, whereas Schmitt, et al. 2003 and Lievens and Peeters 2008 focus on elaboration. Graham, et al. 2002 examines how some types of items naturally produce responses that are more verifiable or objective. Alliger and Dwight 2001 explores the implications of using items that solicit personal information invasively to reduce faking.

  • Alliger, George M., and Stephen A. Dwight. “Invade or Evade? The Trade-Off between Privacy and Invasion and Item Fakability.” Applied HRM Research 6.2 (2001): 95–104.

    Using items from an integrity test, the authors found a link between an item’s susceptibility to faking and perceptions of the degree to which it was an invasion of privacy. They discuss implications for administering overt integrity tests that may be fakable, balanced against concerns about invasiveness.

  • Becker, Thomas E., and Alan L. Colquitt. “Potential versus Actual Faking of a Biodata Form: An Analysis along Several Dimensions of Item Type.” Personnel Psychology 45.2 (1992): 389–406.

    DOI: 10.1111/j.1744-6570.1992.tb00855.x

    Through an examination of different characteristics of items, this study sparked thinking about methods for deterring faking by using items that were more objective and observable/verifiable.

  • Graham, Kenneth E., Michael A. McDaniel, Elizabeth F. Douglas, and Andrea F. Snell. “Biodata Validity Decay and Score Inflation with Faking: Do Item Attributes Explain Variance across Items?” Journal of Business and Psychology 16.4 (2002): 573–592.

    DOI: 10.1023/A:1015454319119

    An empirical examination showing that the most valid items for a biographical data measure differ in type for honest versus faking respondents, implying that people engage in different response processes when faking.

  • Lievens, Filip, and Helga Peeters. “Impact of Elaboration on Responding to Situational Judgment Test Items.” International Journal of Selection and Assessment 16.4 (2008): 345–355.

    DOI: 10.1111/j.1468-2389.2008.00440.x

    Discusses the potential for expanding the elaboration method to situational judgment tests, for which respondents must offer specific reasons (rationale) for their decisions.

  • Schmitt, Neal, Fred L. Oswald, Brian H. Kim, Michael A. Gillespie, Lauren J. Ramsay, and Tae-Yong Yoo. “Impact of Elaboration on Socially Desirable Responding and the Validity of Biodata Measures.” Journal of Applied Psychology 88.6 (2003): 979–988.

    DOI: 10.1037/0021-9010.88.6.979

    This study tests the previously proposed elaboration method as a faking deterrent, for which people are required to justify basic multiple-choice responses with supporting details. Discusses alternative explanations that cloud the application of the elaboration method.

Detecting and Correcting for Faking

Another major approach to addressing faking in selection contexts is to attempt to detect and correct for it. The idea is that, rather than attempting to prevent or reduce this behavior, faking might be addressed by assessing it and then correcting for it in some way (e.g., by adjusting applicant personality scores based on the amount they have faked). This notion has a relatively long history, with many researchers attempting to identify ways of measuring response distortion. However, it has proven to be rather difficult to find an effective approach to detecting and correcting for faking.

Measurement

Faking measurement is important both for improving our understanding of faking behavior in research and for identifying individuals who have faked in practice. Furthermore, being able to measure how much a faked response differs from an honest response suggests that it may be possible to estimate, or recover, true scores with mathematical “corrections” to faked test scores. Measures of faking have been developed based on several approaches. Hartshorne and May 1928 represents an example of early work focusing on extreme responses. Crowne and Marlowe 1964 and Paulhus 2002 address social desirability scales. Burns and Christiansen 2006 and Griffith and Peterson 2008 discuss issues and evidence related to social desirability scales. Kuncel and Tellegen 2009 examines social desirability at the item level. Zickar, et al. 2004 approaches this from an item response theory perspective. Burns and Christiansen 2011 reviews and compares different measurement methods. Overall, one conclusion we can draw from the literature is that, at present, our methods are generally better at catching obvious faking than subtle or sophisticated forms.
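
Many of these measures trace back to the statistical-deviance logic of Hartshorne and May 1928: respondents whose scores on desirability-keyed items are extreme relative to the sample are flagged as possible fakers. The short Python sketch below illustrates that general logic with fabricated scale scores and an arbitrary z-score cutoff; it does not reproduce any published scoring procedure, and, consistent with the conclusion above, a rule like this catches only blatant cases.

    from statistics import mean, stdev

    def flag_deviant(sd_scores, z_cutoff=2.0):
        """Return indices of respondents whose desirability scores are outliers."""
        m, s = mean(sd_scores), stdev(sd_scores)
        return [i for i, x in enumerate(sd_scores) if (x - m) / s > z_cutoff]

    # Fabricated social desirability scale scores for ten respondents;
    # respondent 6 responded in a blatantly desirable way.
    scores = [12, 14, 11, 13, 15, 12, 29, 13, 14, 12]
    print(flag_deviant(scores))  # [6]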

  • Burns, Gary N., and Neil D. Christiansen. “Sensitive or Senseless: On the Use of Social Desirability Measures in Selection and Assessment.” In A Closer Examination of Applicant Faking Behavior. Edited by Richard L. Griffith and Mitchell H. Peterson, 115–150. Greenwich, CT: Information Age, 2006.

    This chapter reviews work on social desirability scales, including their nature, relationship with job performance, and use in personnel selection.

  • Burns, Gary N., and Neil D. Christiansen. “Methods of Measuring Faking Behavior.” Human Performance 24.4 (2011): 358–372.

    DOI: 10.1080/08959285.2011.597473

    Reviews different methods for measuring faking, discussing advantages and disadvantages of each method.

  • Crowne, Douglas P., and David Marlowe. The Approval Motive: Studies in Evaluative Dependence. New York: Wiley, 1964.

    Introduces a popular social desirability scale developed for use with normal rather than psychopathological populations. Rather than relying strictly on statistical deviance, the scale comprises items describing culturally sanctioned but improbable behaviors.

  • Griffith, Richard L., and Mitchell H. Peterson. “The Failure of Social Desirability Measures to Capture Applicant Faking Behavior.” Industrial and Organizational Psychology 1.3 (2008): 308–311.

    DOI: 10.1111/j.1754-9434.2008.00053.x

    Summarizes a number of criticisms of the use of socially desirable responding scales to measure applicant faking.

  • Hartshorne, Hugh, and Mark A. May. Studies in Deceit. New York: Macmillan, 1928.

    This book covers a set of studies exploring a variety of ways to detect deception in educational settings. Most techniques were based on the general notion that deceptive individuals gave responses extreme enough (typically in a socially desirable direction) to designate them as statistical outliers. This work sparked the development of scales for detecting lying and faked responses in social, clinical, and work settings.

  • Kuncel, Nathan R., and Auke Tellegen. “A Conceptual and Empirical Reexamination of the Measurement of the Social Desirability of Items: Implications for Detecting Desirable Response Style and Scale Development.” Personnel Psychology 62.2 (2009): 201–228.

    DOI: 10.1111/j.1744-6570.2009.01136.x

    Proposes a model of how social desirability judgments relate to honest and faked test responses, implying that scales designed to detect faking must be constructed and applied with care.

  • Paulhus, Delroy L. “Socially Desirable Responding: The Evolution of a Construct.” In The Role of Constructs in Psychological and Educational Measurement. Edited by H. I. Braun, D. N. Jackson, and D. E. Wiley, 49–69. Mahwah, NJ: Erlbaum, 2002.

    Paulhus explains the definitions of socially desirable responding that underlie his popular scales, including developments since the original publication.

  • Zickar, Michael J., Robert E. Gibby, and Chet Robie. “Uncovering Faking Samples in Applicant, Incumbent, and Experimental Data Sets: An Application of Mixed-Model Item Response Theory.” Organizational Research Methods 7.2 (2004): 168–190.

    DOI: 10.1177/1094428104263674

    Uses mixed-model item response theory to detect specific patterns of faked responses across applicant, incumbent, and experimental samples.

Bogus Items

Another approach to assessing faking is to use bogus items (e.g., those referring to nonexistent tasks). The logic underlying this approach is relatively straightforward: applicants who endorse these items are attempting to look favorable rather than respond accurately. Anderson, et al. 1984; Bing, et al. 2011; and Pannone 1984 present evidence related to this technique.
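
As a concrete illustration, here is a minimal Python sketch of bogus-item scoring; the task names and the flag-any-endorsement decision rule are invented assumptions, and published applications (e.g., Anderson, et al. 1984) differ in detail.

    # Claimed-experience checklist in which some entries are nonexistent tasks.
    BOGUS_TASKS = {"calibrating a flux capacitor", "matrix solderizing"}

    def bogus_endorsements(claimed_tasks):
        """Count claimed tasks that do not exist; any endorsement suggests inflation."""
        return len(BOGUS_TASKS & set(claimed_tasks))

    applicant = ["forklift operation", "calibrating a flux capacitor",
                 "inventory audits"]
    if bogus_endorsements(applicant) > 0:
        print("flag for review: endorsed a nonexistent task")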

  • Anderson, Cathy, Jack L. Warner, and Cassie C. Spence. “Inflation Bias in Self-Assessment Examinations: Implications for Valid Employee Selection.” Journal of Applied Psychology 69.4 (1984): 574–580.

    DOI: 10.1037/0021-9010.69.4.574

    Demonstrates that bogus items (referring to nonexistent tasks) can be used to detect and correct for inflation bias in self-assessments.

  • Bing, Mark N., Don Kluemper, H. Kristl Davison, Shannon Taylor, and Milorad Novicevic. “Overclaiming as a Measure of Faking.” Organizational Behavior and Human Decision Processes 116.1 (2011): 148–162.

    DOI: 10.1016/j.obhdp.2011.05.006

    Presents evidence that the overclaiming technique—where respondents indicate familiarity with names and topics, including some that are nonexistent—can capture individual differences in faking (a simple scoring sketch follows this list).

  • Pannone, Ronald D. “Predicting Test Performance: A Content Valid Approach to Screening Applicants.” Personnel Psychology 37.3 (1984): 507–514.

    DOI: 10.1111/j.1744-6570.1984.tb00526.x

    Uses bogus-item responses to detect direct lies in a considerable portion of an applicant sample.
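
To make the overclaiming logic in Bing, et al. 2011 concrete, the following rough Python sketch contrasts claimed familiarity with real topics ("hits") and with nonexistent foils ("false alarms"). The topic lists and the simple hit-minus-false-alarm index are simplified assumptions rather than the authors' exact signal-detection procedure.

    def overclaiming(claimed, real_topics, foil_topics):
        """Return (accuracy, foil_rate): endorsing foils signals overclaiming."""
        hit_rate = len(claimed & real_topics) / len(real_topics)
        foil_rate = len(claimed & foil_topics) / len(foil_topics)
        return hit_rate - foil_rate, foil_rate

    real_topics = {"job analysis", "assessment center", "regression analysis"}
    foil_topics = {"parametric resonance interviewing", "bimodal trait indexing"}
    claimed = {"job analysis", "regression analysis",
               "parametric resonance interviewing"}

    accuracy, foil_rate = overclaiming(claimed, real_topics, foil_topics)
    print(f"accuracy index = {accuracy:.2f}, foil endorsement rate = {foil_rate:.2f}")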

Response Time Latencies

One other approach to measuring faking relies on response time latencies (i.e., how long it takes test-takers to respond to items). The connection between response latency and faking may be somewhat complex, so a number of studies have examined this connection and its potential for assessing deception. Vasilopoulos, et al. 2000 examines the role of job familiarity. Walczyk, et al. 2005 examines a cognitive model in this context. Fine and Pirak 2016 explores a within-person approach. Van Hooft and Born 2012 incorporates eye-tracking. Holden 1998 builds on Holden's earlier model. Maricuţoiu and Sârbescu 2019 reports a meta-analysis.
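
As an illustration of the within-person idea in Fine and Pirak 2016, the following Python sketch standardizes an applicant's latencies on scored items against that same person's baseline latencies and flags unusually slow responses, a direction consistent with the meta-analytic finding in Maricuţoiu and Sârbescu 2019. The baseline source and the cutoff are assumptions for illustration, not the published index.

    from statistics import mean, stdev

    def latency_z(test_latencies, baseline_latencies):
        """Standardize scored-item latencies against the person's own baseline."""
        m, s = mean(baseline_latencies), stdev(baseline_latencies)
        return [(t - m) / s for t in test_latencies]

    baseline = [1.8, 2.1, 1.9, 2.3, 2.0]  # seconds on neutral practice items
    scored = [2.0, 4.1, 1.9]              # seconds on scored test items

    z_scores = latency_z(scored, baseline)
    slow_items = [i for i, z in enumerate(z_scores) if z > 2.0]
    print(slow_items)  # [1]: answered unusually slowly for this respondent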

  • Fine, Saul, and Merav Pirak. “Faking Fast and Slow: Within-Person Response Time Latencies for Measuring Faking in Personnel Testing.” Journal of Business and Psychology 3.1 (2016): 1–14.

    Develops a within-person index that evaluates response time latencies against the respondent's own honest, baseline values.

  • Holden, Ronald R. “Detecting Fakers on a Personnel Test: Response Latencies versus a Standard Validity Scale.” Journal of Social Behavior and Personality 13.2 (1998): 387–398.

    Based on the model in Holden, et al. 1992 (cited under Models of Faking), the author proposes and tests a method of detecting faking based on response time latencies.

  • Maricuţoiu, Laurenţiu P., and Paul Sârbescu. “The Relationship between Faking and Response Latencies: A Meta-analysis.” European Journal of Psychological Assessment 35.1 (2019): 3–13.

    DOI: 10.1027/1015-5759/a000361

    Reports a meta-analysis examining the relationship between faking and response latencies. Results suggest that dishonest responding involves longer response latencies compared with honest responding.

  • van Hooft, Edwin A. J., and Marise Ph. Born. “Intentional Response Distortion on Personality Tests: Using Eye-Tracking to Understand Response Processes When Faking.” Journal of Applied Psychology 97.2 (2012): 301–316.

    DOI: 10.1037/a0025711

    Explores the use of eye-tracking technology (fixation patterns) and response time latencies to detect faking on personality and integrity tests.

  • Vasilopoulos, Nicholas L., Richard R. Reilly, and Julia A. Leaman. “The Influence of Job Familiarity and Impression Management on Self-Report Measure Scale Scores and Response Latencies.” Journal of Applied Psychology 85.1 (2000): 50–64.

    DOI: 10.1037/0021-9010.85.1.50

    Examines response time latencies for faked responses and shows that patterns may depend on whether a person is familiar with aspects of the job.

  • Walczyk, Jeffrey J., Jonathan P. Schwartz, Rayna Clifton, Barett Adams, Min Wei, and Peijia Zha. “Lying Person-to-Person about Life Events: A Cognitive Framework for Lie Detection.” Personnel Psychology 58.1 (2005): 141–170.

    DOI: 10.1111/j.1744-6570.2005.00484.x

    Proposes a framework for detecting lies based on response time latencies and discusses its potential applications.

Effects of Correcting

Research has also examined the extent to which personality scores can be effectively corrected for faking. The notion is that if faking can be appropriately measured, practitioners might use this information to correct applicant scores (e.g., by making adjustments proportional to the amount of faking). Goffin and Christiansen 2003 finds that this practice is fairly common. Christiansen, et al. 1994; Hough 1998; and Ellingson, et al. 1999 examine the effects of such corrections. Schmitt and Oswald 2006 explores the further step of removing applicants who fake from the applicant pool.
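
To show what such a correction does, the following Python sketch applies a simple regression-style adjustment in which each applicant's trait score is reduced in proportion to his or her faking-scale score. The coefficient and the data are invented; the sketch merely illustrates the point documented in Christiansen, et al. 1994, namely that corrections can reorder candidates even when criterion-related validity is unchanged.

    from statistics import mean

    def corrected_scores(personality, faking, b=0.5):
        """Subtract b times the faking-scale deviation from each raw score."""
        m = mean(faking)
        return [p - b * (f - m) for p, f in zip(personality, faking)]

    names = ["A", "B", "C"]
    personality = [52, 50, 46]  # raw trait scores: A > B > C
    faking = [18, 10, 8]        # faking-scale scores (mean = 12)

    for name, raw, adj in zip(names, personality,
                              corrected_scores(personality, faking)):
        print(name, raw, adj)
    # Raw order is A > B > C; corrected order is B (51.0) > A (49.0) > C (48.0).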

  • Christiansen, Neil D., Richard D. Goffin, Norman G. Johnston, and Mitchell G. Rothstein. “Correcting the 16PF for Faking: Effects on Criterion-Related Validity and Individual Hiring Decisions.” Personnel Psychology 47.4 (1994): 847–860.

    DOI: 10.1111/j.1744-6570.1994.tb01581.x

    Examines how corrections for faking might affect actual selection decisions (e.g., the rank ordering of candidates) even when statistics such as criterion-related validity are not improved.

  • Ellingson, Jill E., Paul R. Sackett, and Leaetta M. Hough. “Social Desirability Corrections in Personality Measurement: Issues of Applicant Comparison and Construct Validity.” Journal of Applied Psychology 84.2 (1999): 155–166.

    DOI: 10.1037/0021-9010.84.2.155

    In contrast to previous studies, the authors found little support for the notion that faking corrections enable the “recovery” of true, honest scores and lead to an accurate rank-ordering of applicants.

  • Goffin, Richard D., and Neil D. Christiansen. “Correcting Personality Tests for Faking: A Review of Popular Personality Tests and an Initial Survey of Researchers.” International Journal of Selection and Assessment 11.4 (2003): 340–344.

    DOI: 10.1111/j.0965-075X.2003.00256.x

    The results of this study suggest that many practitioners use faking scales and apply correction procedures in practice when using personality tests.

  • Hough, Leaetta M. “Effects of Intentional Distortion in Personality Measurement and Evaluation of Suggested Palliatives.” Human Performance 11.2–3 (1998): 209–244.

    Reviews various approaches for addressing widespread faking and tests two methods for correcting personality test scores based on a faking measure. Special attention is given to avoiding errors that would falsely accuse honest respondents of having faked.

  • Schmitt, Neal, and Frederick L. Oswald. “The Impact of Corrections for Faking on the Validity of Noncognitive Measures in Selection Settings.” Journal of Applied Psychology 91.3 (2006): 613–621.

    DOI: 10.1037/0021-9010.91.3.613

    Demonstrates that correcting faked scores with estimates of what more honest responses would be, or removing faking applicants from a candidate pool entirely, will likely have a minimal effect on mean performance levels.

Faking in Employment Interviews

Several studies have examined faking in employment interviews. Faking in this context may be somewhat different because it involves dynamic, interpersonal interactions rather than item responding. Levashina and Campion 2007 and Reinhard, et al. 2013 focus on assessment. Delery and Kacmar 1998; van Iddekinge, et al. 2007; Levashina and Campion 2006; and Buehl, et al. 2019 focus on antecedents. Lievens and Peeters 2008 examines interviewer sensitivity to impression management. Van Iddekinge, et al. 2005 examines the potential for faking on a personality-focused interview. Bond and DePaulo 2011 discusses lie detection in face-to-face interactions more broadly.

  • Bond, Charles F., Jr., and Bella DePaulo. Is Anyone Really Good at Detecting Lies? Lexington, KY: CreateSpace Independent Publishing Platform, 2011.

    This collection of papers surveys the vast literature on the challenges of detecting lies in face-to-face interactions. It represents a much larger body of research from outside employment contexts, but it bears directly on interactions in employment interviews.

  • Buehl, Anne-Kathrin, Klaus G. Melchers, Therese Macan, and Jana Kühnel. “Tell Me Sweet Little Lies: How Does Faking in Interviews Affect Interview Scores and Interview Validity?” Journal of Business and Psychology 34.1 (2019): 107–124.

    DOI: 10.1007/s10869-018-9531-3

    Examines interviewees' ability to fake in interviews. Findings indicate that interviewees were able to improve their scores and that the amount of faking correlated with their cognitive ability and their ability to identify targeted interview dimensions.

  • Delery, John E., and K. Michele Kacmar. “The Influence of Applicant and Interviewer Characteristics on the Use of Impression Management.” Journal of Applied Social Psychology 28.18 (1998): 1649–1669.

    DOI: 10.1111/j.1559-1816.1998.tb01339.x

    Examines how the use of impression management tactics in mock interviews (as part of an employment interview training program) varied as a function of both the interviewee and the interviewer.

  • Levashina, Julia, and Michael A. Campion. “A Model of Faking Likelihood in the Employment Interview.” International Journal of Selection and Assessment 14.4 (2006): 299–316.

    DOI: 10.1111/j.1468-2389.2006.00353.x

    These authors apply more generic models of faking to explain how faking might be displayed, constrained, and detected specifically in employment interviews.

  • Levashina, Julia, and Michael A. Campion. “Measuring Faking in the Employment Interview: Development and Validation of an Interview Faking Behavior Scale.” Journal of Applied Psychology 92.6 (2007): 1638–1656.

    DOI: 10.1037/0021-9010.92.6.1638

    Introduces a taxonomy of faking behaviors that might occur within a job interview, based on theories of impression management tactics and behaviors, and develops and validates an interview faking behavior scale.

  • Lievens, Filip, and Helga Peeters. “Interviewers’ Sensitivity to Impression Management Tactics in Structured Interviews.” European Journal of Psychological Assessment 24.3 (2008): 174–180.

    DOI: 10.1027/1015-5759.24.3.174

    Examines whether interviewers are likely to notice different kinds of impression management tactics in verbal and nonverbal behaviors.

  • Reinhard, Marc-Andre, Martin Scharmach, and Patrick Müller. “It’s Not What You Are, It’s What You Know: Experience, Beliefs, and the Detection of Deception in Employment Interviews.” Journal of Applied Social Psychology 43.3 (2013): 467–479.

    DOI: 10.1111/j.1559-1816.2013.01011.x

    Connects research on the detection of lies in social interactions (including the use of verbal and nonverbal cues) with research on impression management tactics to examine when faking might be detected in employment interviews.

  • van Iddekinge, Chad H., Lynn A. McFarland, and Patrick H. Raymark. “Antecedents of Impression Management Use and Effectiveness in a Structured Interview.” Journal of Management 33.5 (2007): 752–773.

    Explores factors influencing the use of impression management behaviors and tactics in employment interviews, some of which represent forms of faking.

  • van Iddekinge, Chad H., Patrick H. Raymark, and Philip L. Roth. “Assessing Personality with a Structured Employment Interview: Construct-Related Validity and Susceptibility to Response Inflation.” Journal of Applied Psychology 90.3 (2005): 536–552.

    DOI: 10.1037/0021-9010.90.3.536

    Explores the possibility that interviews specifically designed to assess personality can be faked.

Faking on Other Assessments

Given that a variety of other assessment tools are also used in employee selection, faking on several other assessment types has also been explored. Becker and Colquitt 1992 focuses on faking on biodata items; Day and Carroll 2008 addresses faking on emotional intelligence (EI) measures; and Röhner, et al. 2013 discusses faking on Implicit Association Tests (IATs). Additionally, recent research has investigated deceptive behaviors from a wide variety of perspectives. Schroeder and Cavanaugh 2018 provides evidence related to faking on social networking sites (SNSs), and Johnson, et al. 2004 discusses brain functioning related to deception from a neuropsychological perspective.

  • Becker, Thomas E., and Alan L. Colquitt. “Potential versus Actual Faking of a Biodata Form: An Analysis along Several Dimensions of Item Type.” Personnel Psychology 45.2 (1992): 389–406.

    DOI: 10.1111/j.1744-6570.1992.tb00855.x

    Explores faking on biodata items. The authors discuss susceptibility to faking, examining fake-good experimental data as well as real selection data.

  • Day, Arla L., and Sarah A. Carroll. “Faking Emotional Intelligence (EI): Comparing Response Distortion on Ability and Trait-Based EI Measures.” Journal of Organizational Behavior 29.6 (2008): 761–784.

    DOI: 10.1002/job.485

    Examines the effects of faking on EI measures, comparing faking susceptibility for two types of EI measures: ability-based and trait-based.

  • Johnson, Ray, Jack Barnhardt, and John Zhu. “The Contribution of Executive Processes to Deceptive Responding.” Neuropsychologia 42.7 (2004): 878–901.

    DOI: 10.1016/j.neuropsychologia.2003.12.005

    Explores patterns of medial frontal brain activity when individuals are deceptive, examining response-related control processes associated with making deceptive responses.

  • Röhner, Jessica, Michela Schröder-Abé, and Astrid Schütz. “What Do Fakers Actually Do to Fake the IAT? An Investigation of Faking Strategies under Different Faking Conditions.” Journal of Research in Personality 47.4 (2013): 330–338.

    DOI: 10.1016/j.jrp.2013.02.009

    Investigates the strategies used to fake the IAT, faking directions (i.e., toward low or high scores), pre-existing knowledge about how to fake the IAT, and faking success.

  • Schroeder, Amber N., and Jacqulyn M. Cavanaugh. “Fake It ‘til You Make It: Examining Faking Ability on Social Media Pages.” Computers in Human Behavior 84 (2018): 29–35.

    DOI: 10.1016/j.chb.2018.02.011

    Demonstrates that individuals are able to manipulate profiles posted on SNSs in order to convey a certain image, and examines individual factors related to faking ability on SNSs and strategies predictive of faking success.
