
Gambler's Fallacy
by
Ulrike Hahn

Introduction

The Gambler’s Fallacy is a mistaken belief about sequences of random events. Observing, for example, a long run of “black” on the roulette wheel leads to an expectation that “red” is now more likely to occur on the next trial. In other words, the Gambler’s Fallacy is the belief that a “run” or “streak” of a given outcome lowers the probability of observing that outcome on the next trial. The Gambler’s Fallacy is one of several biases or errors found in people’s perceptions of randomness. For statistically independent events such as the outcomes of a coin toss or a roulette wheel, there is simply no connection between events: coins and roulette wheels have no memory, so there can be no systematic connection between the outcomes on successive trials. Nevertheless, the Gambler’s Fallacy can feel intuitively quite compelling, even to the statistically minded. While it has often been fashionable within psychology to highlight humans’ irrationality and cognitive frailty, understanding the nature of chance requires knowledge of probability theory, combinatorics, and randomness; without these tools, fallacious and non-fallacious beliefs about chance cannot reliably be distinguished (see Why the Gambler’s Fallacy is a Fallacy). There are presently no monographs or introductions devoted specifically to the Gambler’s Fallacy, so an understanding of the topic must be assembled from a range of different sources.

Introductory Works

Very brief but accessible first descriptions of the Gambler’s Fallacy itself can be found in many texts, such as Gardner 1978, Hardman 2009, and Howell 2010.

General Overviews

A number of review articles, including Bar-Hillel and Wagenaar 1991 and Oskarsson, et al. 2009, provide overviews of psychological research on people’s perceptions of randomness, and, as part of this, the Gambler’s Fallacy.

  • Bar-Hillel, Maya, and Willem A. Wagenaar. 1991. The perception of randomness. Advances in Applied Mathematics 12:428–454.

    DOI: 10.1016/0196-8858(91)90029-I

    A good, clear review of studies up to the year 1991, split by methodology. The paper also examines the question of why people may have the perceptions they do, including the question of why our notions of randomness have not been “corrected” by feedback.

  • Oskarsson, An T., Leaf van Boven, Gary H. McClelland, and Reid Hastie. 2009. What’s next? Judging sequences of binary events. Psychological Bulletin 135:262–285.

    DOI: 10.1037/a0014821

    A recent review of empirical work on perceptions of randomness drawn from a number of contexts. Broad in coverage, this makes a good starting point. The review does not, however, aim to be comprehensive, so that older reviews such as Bar-Hillel and Wagenaar 1991 remain useful.

Why the Gambler’s Fallacy is a Fallacy

Given a fair coin, the most likely outcome, at least in the long run, is a roughly equal proportion of heads and tails in the sequence. The failure of a sample to reflect that expected proportion is indicative of a biased coin, and statistical tests would indicate that when, for example, we observe six heads (H) and no tails (T) in a sequence of six throws of the coin, we are unlikely to be dealing with a fair coin. However, the sequence HHHHHH is no less likely than the sequence HTHHTT. This tension between sequence proportion and sequence order seems almost paradoxical, but it dissolves upon the realization that the reason a relative proportion of three heads and three tails is more likely than a uniform run of heads is that there are many more possible sequences that contain the former proportion than there are sequences that contain only heads. In this case, there is one sequence (HHHHHH) containing six heads, but twenty possible sequences with an equal proportion of heads and tails (HTHTHT, THTHTH, HHTTHT, etc.). Throwing a coin exactly six times is equivalent to choosing one of these sequences: each individual sequence is equally likely to be selected, but because there are many more sequences with roughly equal proportions of heads and tails, ending up with one of roughly equal proportion is more likely. Such analysis is made possible by the mathematical tools of combinatorics and probability theory (for an introduction, see Howell 2010). Without such tools, the fallacy is not easy to spot. Even those who know that the Gambler’s Fallacy is a fallacy often hold a related misperception based on the “law of averages”: many believe that when keeping a running count of heads and tails observed so far, then in anything but short sequences, heads will be in the lead roughly half the time and tails roughly the other half. In fact, it is likely that one of the two outcomes will be in the overall lead throughout the sequence (though across a group of many such sequences, roughly half will see heads leading and half tails; see Beltrami 1999). Similarly, Hahn and Warren 2009 presents a gambling game that seems only subtly different from the normal gambling situation but involves a winning strategy: if a prior limit of twenty coin tosses were decided, and one person were to bet that somewhere in those twenty coin tosses the sequence HHHT would occur while another were to bet on the sequence HHHH, then, on average, the first person would win (see the sketch below). These examples make clear why some grounding in probability, combinatorics, and randomness is required to understand psychological research on the Gambler’s Fallacy.
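
To make the counting argument and the twenty-toss game concrete, the following minimal Python sketch (standard library only) counts the balanced length-six sequences and simulates the HHHT-versus-HHHH bet; the number of simulated trials is arbitrary, and the printed proportions are only estimates:

    import math
    import random

    # Of the 2**6 = 64 possible length-six sequences, exactly C(6, 3) = 20
    # contain three heads and three tails, but only one (HHHHHH) is all heads.
    print(math.comb(6, 3))

    def pattern_occurs(pattern, n_tosses=20):
        """Does `pattern` occur somewhere within n_tosses flips of a fair coin?"""
        flips = "".join(random.choice("HT") for _ in range(n_tosses))
        return pattern in flips

    random.seed(0)
    trials = 100_000
    for pattern in ("HHHT", "HHHH"):
        share = sum(pattern_occurs(pattern) for _ in range(trials)) / trials
        print(pattern, round(share, 3))
    # HHHT turns up somewhere in a clearly larger share of twenty-toss sequences
    # than HHHH does, so the bettor who chose HHHT wins the game more often.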

Randomness

Even more recent in the history of mathematics than probability theory and combinatorics are systematic attempts to understand and define the concept of randomness (see Beltrami 1999 for an excellent introduction). Moreover, there are different conceptions of what it means for something to be random, though systematic connections between different conceptions tend to exist. One fundamental distinction is that between randomness as a property of a generating source and randomness as a property of a given output. The quintessential random source is the p=1/2 Bernoulli trial, exemplified by an unbiased, fair coin. Here each trial is independent of the previous one and, on each trial, both outcomes (heads, H, and tails, T) are equally likely. Hence the outcome of the next trial is maximally unpredictable. However, such a random source can generate sequences (such as a run of heads, HHHHHHHHHHHH) that as sequences would not be considered random. A number of formal frameworks can be applied to the question of whether or not a sequence itself should be viewed as random and allow quantification of how random a sequence is. From an information-theoretic perspective, randomness is equivalent to maximum information content (entropy). From the perspective of algorithmic complexity theory, a sequence is random if its algorithmic complexity is maximal, relative to the length of the sequence. The algorithmic complexity of a sequence is the length of the shortest program that will reproduce that sequence when run on a computer. Sequences that can be given concise descriptions (a run of 100 heads) will have low algorithmic complexity, and hence will be non-random. In the context of the Gambler’s Fallacy, some basic familiarity with conceptions of randomness is helpful for two reasons. First, some of the psychological research on randomness is problematic in light of the fact that its methods and materials do not really connect with well-defined notions of randomness (see Nickerson 2002). Second, some of the psychological theories of what underlies human misperceptions such as the Gambler’s Fallacy have close connections to formal frameworks (see Theoretical Accounts, and Lopes 1982).
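
As a rough illustration of the information-theoretic view described above, the short Python sketch below computes the zeroth-order empirical entropy of a uniform run and of a simulated fair-coin sequence; this measure ignores serial order and is meant only to convey the basic idea, not to serve as a test of randomness:

    import random
    from collections import Counter
    from math import log2

    def entropy_bits_per_symbol(seq):
        """Zeroth-order empirical entropy of the symbol frequencies (in bits)."""
        counts = Counter(seq)
        n = len(seq)
        return sum(c / n * log2(n / c) for c in counts.values())

    random.seed(0)
    uniform_run = "H" * 100
    coin_flips = "".join(random.choice("HT") for _ in range(100))

    print(round(entropy_bits_per_symbol(uniform_run), 2))  # 0.0: maximally predictable
    print(round(entropy_bits_per_symbol(coin_flips), 2))   # close to 1.0: near-maximal entropy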

  • Beltrami, Edward J. 1999. What is random? Chance and order in mathematics and life. New York: Springer.

    An excellent introductory textbook on the topic of randomness, which manages to be both readable and precise. Highly recommended.

  • Lopes, Lola L. 1982. Doing the impossible: A note on induction and the experience of randomness. Journal of Experimental Psychology: Learning, Memory, and Cognition 8:626–636.

    DOI: 10.1037/0278-7393.8.6.626

    Highlights the difficulties and conceptual problems inherent in psychological research on randomness. Specifically, the article relates seeming errors in people’s perceptions of randomness to theoretical frameworks for the understanding of randomness that make similar claims. It also details how the outputs of random processes may readily fail tests for randomness, and advances the idea that biases, to the extent that they exist, may not be costly in the real world.

  • Nickerson, Raymond S. 2002. The production and perception of randomness. Psychological Review 109:330–357.

    DOI: 10.1037/0033-295X.109.2.330

    A critical overview of the theoretical complexities inherent in the notion of randomness and the consequences this has for psychological research. The article surveys not only different conceptions of randomness but also the many different strands to psychological research on randomness, detailing how and why a considerable proportion of that research suffers from theoretical difficulties that render the results virtually uninterpretable. Essential reading and an excellent starting point.

Detecting the Fallacy in People’s Behavior

Various methodologies have been used to examine people’s perceptions of randomness, most importantly, random Sequence Generation and Sequence Judgment tasks.

Sequence Generation

Most early psychological work on randomness focused on generation tasks that require participants to generate random sequences (Wagenaar 1972; for more recent work, Oskarsson, et al. 2009). Nevertheless, the specifics of the generation task (e.g., number of possible outcomes, length of the to-be-generated sequence, degree to which the sequence generated so far remains accessible to participants), as well as the tests for randomness applied in analysis, vary so widely that Wagenaar 1972 concluded for the fifteen studies he reviewed that “there is no way of combining the results … into one coherent theory” (p. 69). That said, negative recency, that is, a bias against repetition that might be viewed as indicative of the Gambler’s Fallacy, is a common finding in these studies. One interesting variant of the sequence generation task embeds it in the context of a competitive game. Rapoport and Budescu 1992 presented participants with a game context in which random responding maximizes performance. The instructions for this task make no mention of randomness (thus avoiding any instructional biases), and participants were given financial incentives for optimal responding. The resultant sequences were compared to those obtained in a more “classic” sequence generation task. While there was still some evidence of negative recency in performance on the game, the majority of participants did not display this pattern, whereas the majority of these same participants did display negative recency in the standard sequence generation task. Also sometimes classed as sequence generation tasks, though quite different in spirit, are prediction tasks that provide participants with a sequence and ask them to “continue the series” or “predict the next item/outcome” in the series (e.g., Edwards 1961). Here too, negative recency effects have been observed, though in Edwards’s classic study these decreased with participants’ experience.

  • Edwards, Ward. 1961. Probability learning in 1000 trials. Journal of Experimental Psychology 62:385–394.

    DOI: 10.1037/h0041970

    A classic study using the prediction methodology. As participants’ experience with the sequences increases, predictions compatible with the Gambler’s Fallacy decrease. See also H. Lindman and W. Edwards (1961), “Supplementary report: Unlearning the Gambler’s Fallacy,” Journal of Experimental Psychology 62:630.

  • Oskarsson, An T., Leaf van Boven, Gary H. McClelland, and Reid Hastie. 2009. What’s next? Judging sequences of binary events. Psychological Bulletin 135:262–285.

    DOI: 10.1037/a0014821

    The most recent review of empirical work on perceptions of randomness. Broad in coverage, this makes a good starting point. The review does not, however, aim to be comprehensive, so that older reviews such as Bar-Hillel and Wagenaar 1991 (cited under General Overviews) remain useful and important. Likewise, for critical evaluation, Nickerson 2002 (cited under Randomness) provides an essential complement.

  • Rapoport, Amnon, and David V. Budescu. 1992. Generation of random series in two-person strictly competitive games. Journal of Experimental Psychology: General 121:352–363.

    DOI: 10.1037/0096-3445.121.3.352

    An important study of random-sequence generation that avoids the methodological problems that plague most sequence generation studies by embedding sequence production within a competitive context, which obviates the need to mention the words “random” or “randomness” to participants at all. The paper also contains very good critiques of past research.

  • Wagenaar, Willem A. 1972. Generation of random sequences by human subjects: A critical survey of the literature. Psychological Bulletin 77:65–72.

    DOI: 10.1037/h0032060

    A critical review of early sequence generation studies.

Methodological Problems of Sequence Generation Tasks

Instructions to participants are often arguably theoretically ambiguous or even incoherent. For example, asking participants to generate a random sequence such as “one might see from an unbiased coin” makes reference to a random generating source, but given that all possible sequences of the length participants are asked to generate are equally likely as outputs from an unbiased coin (see Randomness), it is unclear what that instruction should actually mean. Participants are simultaneously being asked to mirror a random generating source and to generate an output that is itself “random,” even though these two notions are not identical, and random sources such as unbiased coins can readily generate sequences, such as uniform runs, that are themselves “non-random.” It is thus unclear whether resultant errors and biases should be attributed to participants or to the instructions of the experimenter (Ayton, et al. 1989; Nickerson 2002). Some of these conceptual difficulties are alleviated if participants are asked to generate an entire set of random sequences. Nickerson and Butler 2009 tested sequence generation in this way and found that, although there were deviations, the distributional properties of participants’ sequences were qualitatively similar to those of a truly random source, with little evidence of the Gambler’s Fallacy. However, it always remains unclear in generation tasks whether observed deviations are “accurate reflections of biased notions of randomness, or biased reflections of accurate notions of randomness (or both)” (Bar-Hillel and Wagenaar 1991). In particular, the tendency to alternate outcomes more than would be expected from a truly random source, which is often taken as evidence of the Gambler’s Fallacy, could be the product of a system that is unbiased but subject to short-term memory limitations (see Kareev 1992 and the section on The Role of Memory within Theoretical Accounts). Short sequences that are constrained to look “typical,” with roughly equal numbers of both outcomes, have alternation rates above the long-term average for purely mathematical reasons (see the sketch below). Hence an agent with a correct concept of randomness who, because of memory limitations, assembles output from a succession of such short, incrementally produced subsequences will nevertheless produce alternation rates seemingly indicative of the Gambler’s Fallacy.
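
The arithmetic behind this last point can be illustrated with a small Python sketch (an illustration of the argument as summarized here, not a reproduction of Kareev’s analysis): among short sequences constrained to contain equal numbers of heads and tails, the average alternation rate exceeds the 0.5 expected of an unconstrained random source.

    from itertools import product

    def alternation_rate(seq):
        """Proportion of adjacent positions at which the outcome changes."""
        changes = sum(a != b for a, b in zip(seq, seq[1:]))
        return changes / (len(seq) - 1)

    n = 6  # a short, "memory-sized" window; the exact value is illustrative
    all_seqs = list(product("HT", repeat=n))
    balanced = [s for s in all_seqs if s.count("H") == n // 2]

    print(sum(map(alternation_rate, all_seqs)) / len(all_seqs))   # exactly 0.5
    print(sum(map(alternation_rate, balanced)) / len(balanced))   # 0.6 for n = 6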

  • Ayton, Peter, Anne J. Hunt, and George Wright. 1989. Psychological conceptions of randomness. Journal of Behavioral Decision Making 2:221–238.

    DOI: 10.1002/bdm.3960020403

    Essential reading for anyone interested in the psychology of randomness, this paper provides an extensive critique of behavioral paradigms used for assessing human beings’ conceptions of randomness. In particular, the paper details extensively how experimenters’ instructions often convey particular task demands to participants that are then at odds with how the resultant performance is assessed.

  • Bar-Hillel, Maya, and Willem A. Wagenaar. 1991. The perception of randomness. Advances in Applied Mathematics 12:428–454.

    DOI: 10.1016/0196-8858(91)90029-I

    A good, clear review of studies up to the year 1991, split by methodology. Although now a bit dated, this review remains useful, not least for its theoretical suggestions. In particular it also examines the question of why people may have the perceptions they do, including the question of why our notions of randomness have not been “corrected” by feedback.

  • Kareev, Yaakov. 1992. Not that bad after all: Generation of random sequences. Journal of Experimental Psychology: Human Perception and Performance 18:1189–1194.

    DOI: 10.1037/0096-1523.18.4.1189

    An examination of the impact of short-term memory limitations on alternation rates. See also Theoretical Accounts.

  • Nickerson, Raymond S. 2002. The production and perception of randomness. Psychological Review 109:330–357.

    DOI: 10.1037/0033-295X.109.2.330

    The article describes the main methodologies used in psychological research on randomness, detailing how and why a considerable proportion of that research suffers from theoretical difficulties that render the results virtually uninterpretable. Essential reading.

  • Nickerson, Raymond S., and S. F. Butler. 2009. On producing random binary sequences. American Journal of Psychology 122:141–151.

    The article involves a sequence generation task but, crucially, differs from most other studies by asking each participant to produce an entire set of random sequences, instead of just a single sequence. This difference is theoretically important as the authors clearly describe, but also of practical consequence because little evidence for the Gambler’s Fallacy is found in this version of the task.

Sequence Judgment

In order to avoid the many interpretative difficulties that plague generation tasks, researchers have also tested people’s perceptions of randomness by asking participants to evaluate experimenter-provided sequences. A famous example of this is Kahneman and Tversky’s (1972) study in which participants were told: “All families of six children in a city were surveyed. In 72 families the exact order of births of boys and girls was GBGBBG. What is your estimate of the number of families surveyed in which the exact order of births was BGBBBB?” (p. 432). Both orders are equally likely, yet participants think the second is less likely than the first, thus displaying systematic biases in their judgments of randomness as well (for other judgment studies see Oskarsson, et al. 2009, cited under General Overviews). However, judgment studies have their own methodological difficulties and have equally involved highly problematic instructions for participants (see Nickerson 2002, cited under Randomness). Lopes and Oden 1987 highlights how problematic it is to ask participants to indicate, for example, whether a particular sequence was generated by a random or a non-random process if no information about the non-random process is provided (see also Lopes 1982). Without such information, the task is arguably ill-defined. Moreover, Lopes and Oden demonstrate in their own study that participants’ accuracy improves significantly when even minimal information about the non-random alternative is provided.

  • Kahneman, Daniel, and Amos Tversky. 1972. Subjective probability: A judgment of representativeness. Cognitive Psychology 3:430–454.

    DOI: 10.1016/0010-0285(72)90016-3

    A classic paper on the topic of people’s misperceptions, not just of randomness but also of other statistical concepts. It advances the representativeness heuristic, which, in the context of randomness, amounts to a mistaken belief that even small samples should exhibit the overall characteristics of a long-run sequence generated by a random source. See also Theoretical Accounts.

  • Lopes, Lola L. 1982. Doing the impossible: A note on induction and the experience of randomness. Journal of Experimental Psychology: Learning, Memory, and Cognition 8:626–636.

    DOI: 10.1037/0278-7393.8.6.626

    Lopes shows how people’s supposed misconceptions of randomness can themselves be linked to fundamental problems in the formalization of randomness. She argues also that it is important to take into account the costs of deviations from “optimal,” and suggests that the costs from misperceptions of randomness might be low, when viewed from a signal detection perspective. This is subsequently followed up experimentally in Lopes and Oden 1987.

  • Lopes, Lola L., and G. C. Oden. 1987. Distinguishing between random and nonrandom events. Journal of Experimental Psychology: Learning, Memory, and Cognition 13:392–400.

    DOI: 10.1037/0278-7393.13.3.392

    An experimental study of randomness perception. Participants are given sequences to judge and asked to indicate whether they are from a random process or not. Demonstrates how providing participants with minimal information about the characteristics of the non-random alternative significantly increases performance. Importantly also, performance is compared to that of an optimal judge for this task.

Other Methods

Olivola and Oppenheimer 2008 provides an interesting demonstration of one way in which people’s beliefs about randomness might be elicited indirectly without the need for either explicit sequence generation or judgment. One of Olivola and Oppenheimer’s studies presented participants with sequences and, subsequent to presentation, asked them simply to recall the sequences. Because human memory is, in part, a reconstructive process that does not simply “read off” a past record, participants’ recollection differed systematically depending on whether they had been told the sequences had been generated by flipping a coin or by a complex algorithm with internal order. Participants in the coin condition (but not in the algorithm condition) remembered the streaks to be shorter than they had actually been. Finally, researchers have conducted a variety of tests for evidence of negative recency and hence implicit belief in the Gambler’s Fallacy in a number of real-world data sets ranging from actual betting data to investment contexts, which are discussed in The Real World.

Hot Hand Fallacy

Conceptually, the “hot hand fallacy” is a mirror image of the Gambler’s Fallacy. Specifically, it is the belief that streaks or runs are (at least up to a point) more likely to continue than not. In other words, whereas the Gambler’s Fallacy involves an expectation of negative recency, such that just-observed outcomes are less likely to be observed on the next trial, the hot hand fallacy involves an expectation of positive recency. The hot hand fallacy, too, is linked to perceptions of randomness because, according to the classic study Gilovich, et al. 1985, people perceive patterns, in particular in the context of skilled behavior, where there are none: although it is widely believed that sportsmen or women can be “on a roll,” “on fire,” or have “a hot hand,” their analysis of data from professional basketball players found no evidence for this. This absence of true positive recency in the data makes belief in it fallacious. Whether there really is no evidence that skilled performance in sport is subject to streaks has been questioned. For one thing, the classification of relevant aspects of sporting behavior as random is based on an inability to find evidence in favor of genuine patterns of elevated performance. Consequently, such a finding could be based simply on the use of insufficiently sensitive statistical tests (Miyoshi 2000). Furthermore, subsequent research (for a review, see Bar-Eli, et al. 2006) has confirmed the analyses of Gilovich, et al. 1985 in some sporting domains, while finding evidence for a genuinely “hot hand” in others. All of this leaves open the question of whether or not observers actually possess reliable evidence when inferring the presence of a true “hot hand” on any given occasion, and in that sense should be considered justified in that belief (see also Keren and Lewis 1994). Finally, within the laboratory it has been demonstrated repeatedly that participants will shift between the Gambler’s Fallacy and the “hot hand” as a function of instructional manipulations (Ayton and Fischer 2004; Burns and Corpus 2004).
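
One simple way to look for a “hot hand” in a sequence of hits and misses is to compare the hit rate immediately after a hit with the hit rate immediately after a miss. The Python sketch below applies this comparison to a simulated shooter who hits at a constant 50 percent and has no hot hand at all; the hit rate, number of shots, and seed are arbitrary, and the sketch is only in the spirit of the conditional-probability analyses used in this literature, not a reproduction of any published test:

    import random

    def conditional_hit_rates(shots):
        """Hit rate after a hit vs. hit rate after a miss (shots are 0/1)."""
        after_hit = [b for a, b in zip(shots, shots[1:]) if a == 1]
        after_miss = [b for a, b in zip(shots, shots[1:]) if a == 0]
        return sum(after_hit) / len(after_hit), sum(after_miss) / len(after_miss)

    random.seed(2)
    shots = [int(random.random() < 0.5) for _ in range(200)]  # a purely chance shooter
    print(conditional_hit_rates(shots))
    # With only a couple of hundred shots the two rates fluctuate noticeably around
    # 0.5 even though no hot hand exists; such sampling noise is part of why modest
    # genuine streakiness is hard to detect (cf. Miyoshi 2000).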

  • Ayton, Peter, and Ilan Fischer. 2004. The hot hand fallacy and the gambler’s fallacy: Two faces of subjective randomness? Memory & Cognition 32:1369–1378.

    DOI: 10.3758/BF03206327

    An experimental, prediction-based examination of the hot hand and the Gambler’s Fallacy. Behavior that is congruent with the Gambler’s Fallacy is observed in participants’ predictions of outcomes for a (simplified) simulated roulette wheel. Patterns congruent with the hot hand fallacy are observed in their own confidence regarding these predictions.

  • Bar-Eli, Michael, Simcha Avugos, and Markus Raab. 2006. Twenty years of “hot hand” research: Review and critique. Psychology of Sport and Exercise 7:525–553.

    DOI: 10.1016/j.psychsport.2006.03.001

    A critical review of inquiry into the “hot hand” in sporting contexts. The paper also contains a review of critiques of the statistical tools used to assess momentum in skilled performance.

  • Burns, Bruce, and Brian Corpus. 2004. Randomness and inductions from streaks: “Gambler’s fallacy” versus “hot hand.” Psychonomic Bulletin and Review 11:179–184.

    DOI: 10.3758/BF03206480

    The paper demonstrates how descriptions of the generating source moderate the degree of positive recency observed in participants’ outcome predictions.

  • Gilovich, Thomas, Robert Vallone, and Amos Tversky. 1985. The hot hand in basketball: On the misperception of random sequences. Cognitive Psychology 17:295–314.

    DOI: 10.1016/0010-0285(85)90010-6

    A classic paper. Analyzes data from real basketball games in search of evidence of “hot hands” and even “hot nights.”

  • Keren, Gideon, and Charles Lewis. 1994. The two fallacies of gamblers: Type I and Type II. Organizational Behavior and Human Decision Processes 60:75–89.

    DOI: 10.1006/obhd.1994.1075

    An experimental investigation concerning people’s beliefs about their ability to detect a bias for a given number in an imperfect roulette wheel. Participants seem to overestimate the reliability of evidence for bias that they could obtain.

  • Miyoshi, Hiroto. 2000. Is the “hot-hands” phenomenon a misperception of random events? Japanese Psychological Research 42:128–133.

    DOI: 10.1111/1468-5884.00138

    The author constructed an artificial data set that contained genuine “hot hand” periods, but was otherwise matched (in terms of total number of observations, probability of successful shot, etc.) to Gilovich and colleagues’ data set based on real basketball games (see Gilovich, et al. 1985). The author then demonstrates that the statistical tests applied by Gilovich and colleagues are relatively poor at identifying the “hot hand” periods, detecting only 12 percent on average.

Theoretical Accounts

A number of theoretical proposals have been put forward as explanations of the Gambler’s Fallacy. Though there is considerable variation in detail, there are several recurrent (and interrelated) themes that run through such explanations. The first is the idea that people have a misconception about randomness in that they believe random processes must be self-correcting in order to sustain (accurately perceived) global properties of long-run random sequences. Bernoulli’s Law of Large Numbers (see Beltrami 1999, cited under Randomness) establishes that, for an unbiased coin, it is possible to stipulate a small bound on the extent to which the proportion of heads observed in a sample will deviate from 0.5, and that as the sequence length n increases, it becomes increasingly likely that the observed deviation will indeed lie within that bound. In other words, the long-run average is very nearly equal to the probability for sufficiently large n. However, this stems simply from the fact that as sequences get longer, the set of all possible sequences (as can be derived via a combinatorial tree) of that length will contain a greater and greater proportion of sequences containing roughly equal numbers of heads and tails (see Why the Gambler’s Fallacy is a Fallacy). Any individual sequence one might generate corresponds to randomly selecting one sequence from the set of all possible sequences; consequently, it will be more and more likely to be a sequence that contains roughly equal proportions also. It does not in any sense arise because the unfolding sequence is subject to some kind of equilibrium process by which it will “become” more equally proportioned as it continues. People may, in a sense, hold a mistaken belief in such an equilibrium process for a number of reasons. One potential reason is that they fail to distinguish between sequences based on independent events (that is, sampling with replacement) and sequences generated from sampling without replacement. A second (and related) possibility is that people underestimate how different, in statistical terms, short sequences are from very long sequences, and so incorrectly impose on one the properties of the other. Alternatively, the seeming bias and misperception may stem from information-processing limitations, in particular limitations to short-term memory. Each of these possibilities has been pursued within the literature.
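
The difference between convergence of proportions and “self-correction” can be seen directly in simulation. The following Python sketch (seed and sample sizes arbitrary) tracks both the running proportion of heads and the raw difference between the number of heads and tails:

    import random

    random.seed(3)
    heads = 0
    for n in range(1, 100_001):
        heads += random.random() < 0.5
        if n in (100, 10_000, 100_000):
            tails = n - heads
            print(n, round(heads / n, 4), heads - tails)
    # The proportion of heads homes in on 0.5 as n grows, yet the raw difference
    # between heads and tails shows no tendency to return to zero: the 0.5 proportion
    # is approached through dilution of early deviations, not through any compensating mechanism.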

Sampling with and without Replacement

One explanation for the Gambler’s Fallacy is that people do not distinguish properly between sequences based on independent events—that is, sampling with replacement—and sequences generated from sampling without replacement (Bar-Hillel and Wagenaar 1991; Ayton, et al. 1989; Rabin 2002). Drawing red and black balls from an urn, where the selected ball is returned to the urn before the next draw, is an example of the former, while making those draws without replacing the balls after each draw is an example of the latter. The Gambler’s Fallacy is a fallacy only in the context of sampling with replacement; when sampling without replacement, each draw of black increases the chance of drawing red on the next trial because the stock of black balls in the urn is depleted. So, one possible explanation for the fallacy could be that people mistake one for the other, possibly because they have more experience of sampling without replacement in the real world.
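
A small worked example, using Python’s fractions module for exact values, makes the contrast explicit; the urn composition (five black, five red) is purely illustrative:

    from fractions import Fraction

    black, red = 5, 5

    # With replacement: the composition never changes, so after any run of black
    # the probability of red on the next draw is still 1/2.
    print(Fraction(red, black + red))

    # Without replacement: each black drawn depletes the stock of black balls.
    for blacks_drawn in range(4):
        print(blacks_drawn, Fraction(red, (black - blacks_drawn) + red))
    # After three blacks have been removed, P(red) has risen from 1/2 to 5/7:
    # the Gambler's Fallacy expectation is correct only in this second regime.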

Representativeness

A second (and related) possibility is that people underestimate how different, in statistical terms, short sequences are from very long sequences. In other words, they underestimate how much the sequence average is likely to deviate from the underlying probability for short (or even medium-length) sequences. Consequently, they wrongly attribute long-run properties (as captured by the Law of Large Numbers) to short sequences also. Such a misperception underlies the “representativeness” account of Kahneman and Tversky 1972, which has been given a variety of different formalizations (see Rabin 2002 and Rapoport and Budescu 1997).
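
The following Python sketch conveys how much more variable short sequences are; the sequence lengths, number of samples, and the 0.2 cut-off are arbitrary choices made only for illustration:

    import random

    def proportion_heads(n):
        return sum(random.random() < 0.5 for _ in range(n)) / n

    random.seed(4)
    for n in (10, 100, 10_000):
        samples = [proportion_heads(n) for _ in range(2_000)]
        far_off = sum(abs(p - 0.5) > 0.2 for p in samples) / len(samples)
        print(n, round(far_off, 3))
    # For n = 10, a non-trivial share of samples deviates from 0.5 by more than
    # 0.2; for n = 10,000 essentially none do. Short sequences simply need not
    # look "representative" of the 50/50 parent process.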

  • Kahneman, Daniel, and Amos Tversky. 1972. Subjective probability: A judgment of representativeness. Cognitive Psychology 3:430–454.

    DOI: 10.1016/0010-0285(72)90016-3

    A classic paper. Kahneman and Tversky maintain that people conflate the properties of large and small samples and believe that “the law of large numbers applies to small numbers as well.” This error underlies their intuitions about randomness, and leads them to base their judgments of randomness on the extent to which a small sample seems representative of the larger parent population.

  • Rabin, Matthew. 2002. Inference by believers in the Law of Small Numbers. Quarterly Journal of Economics 117:775–816.

    DOI: 10.1162/003355302760193896

    A model of sequence perception which implements “representativeness” and belief “in the Law of Small Numbers” through sampling without replacement. Otherwise the model is fully Bayesian. The model draws overly strong inferences about the underlying rate from short sequences and explains both short-run underreaction by investors and long-run overreaction, as well as a tendency to see variation in rates where there is none (e.g., in the performance of mutual fund managers).

  • Rapoport, Amnon, and David V. Budescu. 1997. Randomization in individual choice behavior. Psychological Review 104:603–617.

    DOI: 10.1037/0033-295X.104.3.603

    Rapoport and Budescu formalize the notion in Kahneman and Tversky (1972) of local representativeness and couple it with a stochastic element to allow the generation of random sequences. The model is then compared in detail to behavioral data. The model can also be seen as formalizing Kareev’s (1992) notion (see The Role of Memory) that participants seek to generate a “typical” sequence while operating under short-term memory limitations.

The Role of Memory

A third strand of explanation of the Gambler’s Fallacy focuses on the role of information-processing limitations, in particular memory (Baddeley 1966; Kareev 1992; Bar-Hillel and Wagenaar 1991, cited under General Overviews). Human beings have limited short-term memories. As a consequence, there is a limit on the length of sequence they can actively hold in mind. As Kareev 1992 demonstrated, this could lead to behavior deemed indicative of the Gambler’s Fallacy even where the underlying concept of randomness is correct. One source of evidence for the Gambler’s Fallacy is that people produce sequences with alternation rates (transitions from heads to tails and vice versa) that are higher than the long-run average. This may stem from a belief that “streaks” or “runs” are unlikely, and that with each additional head a subsequent tail becomes more likely, in other words, a mistaken belief in the Gambler’s Fallacy. However, as Kareev demonstrates, short sequences of the “typical,” roughly balanced kind people try to produce inherently have higher alternation rates than long sequences. This mathematical fact means that even an unbiased process generating such short sequences will “over-alternate.” An agent with a limited short-term memory will, in effect, be generating a long sequence of random outputs by stringing together short sequences of the kind that can be held in working memory (for one way of modeling such a process, see Rapoport and Budescu 1997), and consequently will exhibit over-alternation in long sequences as well. Hahn and Warren 2009 demonstrates how short-term memory limitations may affect not just the generation but also the perception of random sequences, in a way that brings together several of the above strands. While it is true that all possible sequences of length n are equally likely outcomes of n flips of a coin, they are not equally likely as (local) subsequences within a longer (global) sequence: if one starts flipping a coin, the average number of coin tosses one has to wait before encountering the sequence HHHH is considerably longer than the average wait for the sequence HHHT (see the sketch below). Given our short-term memory limitations, our actual experience of unfolding sequences will be akin to a fixed-length sliding window moving through the overall data stream, in both sequence production and perception. The local subsequences that appear in that moving window therefore differ in how likely they are to be encountered. Moreover, within such a window the experience of sampling with replacement comes to resemble that of sampling without replacement remarkably closely.
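
The waiting-time asymmetry referred to above is easy to verify by simulation. The Python sketch below (seed and number of trials arbitrary) estimates the mean number of flips until each pattern first completes:

    import random

    def waiting_time(pattern, rng):
        """Number of flips of a fair coin until `pattern` first completes."""
        window, flips = "", 0
        while not window.endswith(pattern):
            window = (window + rng.choice("HT"))[-len(pattern):]
            flips += 1
        return flips

    rng = random.Random(5)
    trials = 50_000
    for pattern in ("HHHT", "HHHH"):
        mean_wait = sum(waiting_time(pattern, rng) for _ in range(trials)) / trials
        print(pattern, round(mean_wait, 1))
    # Mean waits come out near 16 flips for HHHT but near 30 for HHHH, so within
    # any limited window of experience HHHH really is the rarer sight, even though
    # both patterns are equally likely at any fixed set of four positions.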

  • Baddeley, Alan D. 1966. The capacity for generating information by randomization. Quarterly Journal of Experimental Psychology 18:119–129.

    DOI: 10.1080/14640746608400019

    A series of studies demonstrating how the generation of random sequences is affected by information-processing limitations.

  • Hahn, Ulrike, and Paul A. Warren. 2009. Perceptions of randomness: Why three heads are better than four. Psychological Review 116:454–461.

    DOI: 10.1037/a0015241

    The paper argues that seeming biases and errors in people’s perception of the outputs of random generating sources reflect subjective experience of such outputs. Experiential limitations in the form of limited short-term memory capacity and likely limits on the overall amount of data experienced mean that, within that experience, a sequence such as HHHH is less likely to be encountered than HHHT.

  • Kareev, Yaakov. 1992. Not that bad after all: Generation of random sequences. Journal of Experimental Psychology: Human Perception and Performance 18:1189–1194.

    DOI: 10.1037/0096-1523.18.4.1189

    Kareev shows that short sequences (i.e., up to a length of ten) have higher alternation rates than the long-run expected average. He then uses this to argue that over-alternations in generation tasks are a statistical artifact of the interaction between participants’ attempts to produce a “typical sequence” and short-term memory limitations. Hence the generation of over-alternations should not be taken to indicate that participants believe in the Gambler’s Fallacy.

  • Rapoport, Amnon, and David V. Budescu. 1997. Randomization in individual choice behavior. Psychological Review 104:603–617.

    DOI: 10.1037/0033-295X.104.3.603

    Rapoport and Budescu formalize the notion in Kahneman and Tversky (1972) (see Representativeness) of local representativeness. The model, which is compared with behavioral data, can also be seen as formalizing Kareev’s (1992) notion that participants seek to generate a “typical” sequence while operating under short-term memory limitations.

Randomness, Algorithmic Complexity, and Ease of Encoding

A fourth and final type of explanation of the Gambler’s Fallacy draws on theoretical approaches to randomness that characterize randomness not as a property of a generating source, but as a property of sequences themselves. Specifically, Falk and Konold 1997 suggests that people judge randomness in a manner akin to theoretical accounts of randomness based on compression and algorithmic complexity (see Randomness). The more “regular” a sequence, the easier it is to encode, whether in verbal description or in memory. Hence the difficulty of encoding a sequence serves as a proxy for its randomness: the harder a sequence is to describe or to remember, the more random it appears. This creates a bias against any kind of uniformity, including runs or streaks, thus providing an (indirect) explanation of the Gambler’s Fallacy.
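
A computable stand-in for this idea is compressed length: regular sequences admit short descriptions and therefore compress well. The Python sketch below uses zlib purely as a rough proxy (true algorithmic complexity is uncomputable, and the sequence length and seed are arbitrary):

    import random
    import zlib

    def compressed_size(seq):
        """Bytes after zlib compression: a crude proxy for description length."""
        return len(zlib.compress(seq.encode()))

    random.seed(1)
    streak = "H" * 200
    irregular = "".join(random.choice("HT") for _ in range(200))

    print(compressed_size(streak), compressed_size(irregular))
    # The uniform streak compresses to far fewer bytes than the irregular sequence:
    # it admits a concise description, which on the encoding account is precisely
    # why it strikes people as non-random.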

  • Falk, Ruma, and Clifford Konold. 1997. Making sense of randomness: Implicit encoding as a basis for judgment. Psychological Review 104:301–318.

    DOI: 10.1037/0033-295X.104.2.301

    An encoding-based account of people’s perceptions of randomness in the spirit of algorithmic complexity theory. “Regular” sequences are easy to encode (in description or memory) and are thus judged to be non-random. This means that long streaks or runs, like other regularities, are viewed as indicative of non-randomness.

The Real World

As the name suggests, the Gambler’s Fallacy has always been intimately tied to gambling behavior. The so-called d’Alembert system (after the 18th-century French philosopher and mathematician Jean le Rond d’Alembert) is possibly the most famous system for playing roulette. The player chooses an outcome, say black, an initial bet, and an increment; if black comes up on the first trial, then on the next trial the player places a reduced bet on the same outcome. If black does not come up, then an incrementally increased bet is placed on it for the next trial. One still finds this system recommended on present-day online gambling sites, even though it is based on the Gambler’s Fallacy, that is, the mistaken belief that occurrences of black make red more likely on the next trial because the sequence must tend toward equal proportions of both outcomes. Of course, the system cannot provide the gambler with an advantage over the house, because each trial is independent of the previous one. Gambling itself is the subject of considerable research attention.
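
Because each spin is independent, no staking scheme of this kind can change the expected value of play. The Python sketch below simulates the d’Alembert rule as described above on an idealized fair 50/50 wheel (no house zero); the stake, increment, session length, and number of sessions are arbitrary:

    import random

    def dalembert_session(rounds=200, initial_bet=10, increment=1, rng=None):
        """Bet on 'black' every spin; reduce the stake after a win, raise it after a loss."""
        rng = rng or random.Random()
        bet, winnings = initial_bet, 0
        for _ in range(rounds):
            win = rng.random() < 0.5
            winnings += bet if win else -bet
            bet = max(increment, bet - increment) if win else bet + increment
        return winnings

    rng = random.Random(7)
    sessions = [dalembert_session(rng=rng) for _ in range(20_000)]
    print(round(sum(sessions) / len(sessions), 2))
    # The average outcome hovers around zero: varying the stakes in response to past
    # outcomes creates no edge, and a real wheel's zero would make it a losing game.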

Problem Gambling

Pathological gambling (for which diagnostic criteria are provided in the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders, DSM-IV) is perceived to be both on the increase and associated with a range of undesirable personal and societal problems (Petry and Armentano 1999). This has led to research seeking to identify the causes of pathological gambling, among them mistaken beliefs about randomness such as the Gambler’s Fallacy (Wagenaar 1988; Toneatto, et al. 1997; Toneatto and Ladouceur 2003). Evidence for a cognitive basis of, or at least contribution to, pathological gambling has been sought in interviews, in “think-aloud” studies in which gamblers verbalize their thoughts during gambling, and in intervention studies that seek to correct erroneous beliefs.

  • Petry, Nancy M., and Chris Armentano. 1999. Prevalence, assessment, and treatment of pathological gambling: A review. Psychiatric Services 50:1021–1027.

    A widely cited review of the literature on prevalence, assessment, and treatment of pathological gambling between 1984 and 1999.

  • Toneatto, Tony, and Robert Ladouceur. 2003. Treatment of pathological gambling: A critical review of the literature. Psychology of Addictive Behaviors 17:284–292.

    DOI: 10.1037/0893-164X.17.4.284

    Review of treatments, including treatments based on addressing faulty cognitions such as the Gambler’s Fallacy.

  • Toneatto, Tony, Tamara Blitz-Miller, Kim Calderwood, Rosa Dragonetti, and Andrea Tsanos. 1997. Cognitive distortions in heavy gambling. Journal of Gambling Studies 13:253–266.

    DOI: 10.1023/A:1024983300428

    A report of interviews with heavy gamblers aimed at identifying the presence of cognitive biases and erroneous beliefs.

  • Wagenaar, Willem A. 1988. Paradoxes of gambling behaviour. London: Erlbaum.

    Wagenaar hypothesized that cognitive distortions play an important role in gambling behavior. Specifically, Wagenaar sought to integrate research on gambling with the heuristics and biases tradition of psychological judgment and decision-making research founded by Amos Tversky and Daniel Kahneman. One of the best-known of these purported cognitive heuristics is the representativeness heuristic, which in the context of random processes gives rise to the Gambler’s Fallacy (see Theoretical Accounts).

Evidence in Real World Gambling Tasks

The problems gambling can cause have also led to research seeking to provide evidence for the Gambler’s Fallacy outside the laboratory. For example, Clotfelter and Cook 1993 provides evidence for the Gambler’s Fallacy in lottery play, Terrell 1998 provides evidence for the fallacy in the context of greyhound races, and Croson and Sundali 2005 provides data from casinos in support of the fallacy.

  • Clotfelter, Charles T., and Philip J. Cook. 1993. The “Gambler’s Fallacy” in lottery play. Management Science 39:1521–1525.

    DOI: 10.1287/mnsc.39.12.1521

    The authors analyzed state lottery bets in Maryland and found evidence that the amount of money bet on a given number significantly fell immediately after its inclusion in a winning draw, in all likelihood reflecting a mistaken belief that it was unlikely to win again in quick succession.

  • Croson, Rachel, and James Sundali. 2005. The Gambler’s Fallacy and the hot hand: Empirical data from casinos. Journal of Risk and Uncertainty 30:195–209.

    DOI: 10.1007/s11166-005-1153-2

    The authors examined the betting behavior of 139 roulette players in Reno, Nevada. Evidence for Gambler’s Fallacy-driven behavior in the form of responsiveness to runs (streaks) was obtained. After a run of, say, black, players became more likely to bet red. Statistically significant deviations were observed following runs of length five or more, with the magnitude of deviation increasing with the length of the run.

  • Terrell, Dek. 1998. Biases in assessments of probabilities: New evidence from greyhound races. Journal of Risk and Uncertainty 17:151–166.

    DOI: 10.1023/A:1007771613236

    Terrell examined data from almost 10,000 greyhound races at Woodland Park, Kansas, in the years 1989 and 1994 and found consistent evidence for the Gambler’s Fallacy. Here dogs are randomly assigned a starting post or position; analysis of betting odds suggested that bettors underestimated the probability of a repeat win from the same post.

Economic Behavior

A number of economic behaviors have been linked to the Gambler’s Fallacy (and also to the Hot Hand Fallacy): for example, the tendency of investors to hold on to losing stocks for too long and to sell gaining stocks too quickly, presumably because the investor expects a reversal in what are in fact random price movements (the so-called Disposition Effect; see Johnson, et al. 2005). It should be noted, however, that such behaviors typically also have other theoretical explanations.

  • Johnson, Joseph, Gerard J. Tellis, and Deborah J. MacInnis. 2005. Losers, winners and biased traders. Journal of Consumer Research 32:324–329.

    DOI: 10.1086/432241

    An experimental investigation seeking to provide evidence for involvement of the Gambler’s Fallacy in the decisions about whether or not to buy stocks in light of past performance.

LAST MODIFIED: 11/29/2011

DOI: 10.1093/OBO/9780199828340-0027
