By Rebecca Brown
Many people will be broadly familiar with the ‘heuristics and biases’ (H&B) program of work, made prominent by the psychologists Amos Tversky and Daniel Kahneman in the 1970s. H&B developed alongside the new sub-discipline of Behavioural Economics, both detailing the ways in which human decision-makers deviate from what would be expected of homo economicus – an imaginary, perfectly rational being that always aims at maximising utility. For instance, in a famous experiment, Tversky and Kahneman gave people the following information (1983: 297):
Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.
Participants were then asked which of the two alternatives was more probable:
1. Linda is a bank teller.
2. Linda is a bank teller and is active in the feminist movement.
The majority of people picked 2., despite the fact that the probability of two events occurring together (Linda being a bank teller and being active in the feminist movement) can never exceed the probability of either event occurring on its own (e.g. Linda being a bank teller). By answering that 2. is more likely than 1., people commit the ‘conjunction fallacy.’ Tversky and Kahneman argued that people choose 2. over 1. because they use a ‘representativeness heuristic’: the description of Linda in 2. seems more representative of the description of her in the vignette, and so seems more likely to be accurate.
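The conjunction rule that participants violate can be shown in a few lines. The sketch below uses made-up probabilities (the 0.05 and 0.6 figures are purely illustrative, not values from the study):

```python
# Illustrative (made-up) probabilities -- not values from the study.
p_bank_teller = 0.05           # P(Linda is a bank teller)
p_feminist_given_teller = 0.6  # P(active in the feminist movement | bank teller)

# P(A and B) = P(A) * P(B | A), and P(B | A) can be at most 1,
# so the conjunction can never be more probable than A alone.
p_conjunction = p_bank_teller * p_feminist_given_teller

assert p_conjunction <= p_bank_teller
print(round(p_conjunction, 3))  # 0.03
```

However favourable the conditional probability, multiplying by it can only shrink (or preserve) the probability of the first conjunct, which is why answer 2. can never be the more probable one.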
Another example (Kahneman 2011: 7):
An individual has been described by a neighbour as follows: “Steve is very shy and withdrawn, invariably helpful but with little interest in people or in the world of reality. A meek and tidy soul, he has a need for order and structure, and a passion for detail.” Is Steve more likely to be a librarian or a farmer?
Now, it might be relevant that the description of Steve fits with that of a stereotypical librarian. But it is also relevant how likely any given person is to be a librarian versus a farmer. Tversky and Kahneman estimate that there are more than 20 male farmers for each male librarian in the United States, making it much more likely that Steve is a (meek and tidy) farmer than a librarian. Again, most people seem to use a heuristic that suggests, since Steve sounds more like a librarian than a farmer, the former is more likely. In doing so, they neglect the base rate of farmers versus librarians, and so miss out on crucial information when making their judgement.
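The base-rate point can be made concrete with Bayes’ theorem. In the sketch below, the 20:1 farmer-to-librarian ratio comes from Tversky and Kahneman’s estimate, but the likelihoods of the description fitting each occupation are assumed purely for illustration:

```python
# The 20:1 ratio is Tversky and Kahneman's estimate for the US;
# the two likelihoods below are made up for illustration.
n_librarians = 1
n_farmers = 20   # roughly 20 male farmers per male librarian

# Suppose the description fits a stereotypical librarian far better:
p_desc_given_librarian = 0.9
p_desc_given_farmer = 0.1

# Bayes' theorem (unnormalised): posterior is proportional to
# likelihood x prior, where the prior here is the base rate.
post_librarian = p_desc_given_librarian * n_librarians
post_farmer = p_desc_given_farmer * n_farmers

# Even with a 9:1 likelihood ratio favouring 'librarian',
# the 20:1 base rate makes 'farmer' more probable overall.
print(post_farmer > post_librarian)  # True
```

On these assumed numbers the stereotype-friendly evidence is simply outweighed by the prior, which is exactly the information the representativeness heuristic throws away.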
These examples – and numerous others provided by the H&B program of work – are extremely compelling. They have led many to assume that humans frequently stumble into reasoning mistakes due to the use of fast and frugal heuristics and the operation of cognitive biases. To correct for these biases, it is thought, we must either exercise our slower, deliberative capacities more frequently, or rely upon benevolent ‘nudges’ to shape our decision-making and behaviour in helpful ways.
There is, however, another way of thinking about the operation of H&B. Gerd Gigerenzer, another psychologist, has argued that the image of heuristics took a negative turn in the 1970s with the rise to prominence of the H&B program. Moreover, Gigerenzer argues that this negative view of heuristics is undeserved. The criticism directed at the heuristics described is that they fail to obey axiomatic rationality. That is, they fail to conform to abstract axioms, such as Completeness (for any two options, an agent must have a weak preference for one over the other) or Transitivity (if B is preferred to A and C is preferred to B, then A cannot be preferred to C). But why should we care about obeying axiomatic rationality? What pull should such abstract axioms as Completeness and Transitivity have over us?
One plausible answer is that obeying such axioms will allow us to make accurate predictions over the world we live in and thus to make decisions and enact behaviours that will promote utility. The H&B program has provided us with numerous examples of cases where people seem to make the ‘wrong’ decision due to the failures of fast and frugal reasoning to facilitate good decision-making. We might expect such ‘wrong’ decisions to result in lower utility.
Gigerenzer, however, thinks that we shouldn’t evaluate heuristics according to the extent to which they conform to the standards of axiomatic rationality. Rather, we should consider heuristics as evolved tools to facilitate appropriate behaviour. That is, given the kind of situations human decision-makers typically find themselves in, with the kind of information they typically have available, and the kinds of consequences that typically attach to the decisions they make, how should they reason? Gigerenzer argues that heuristics, whilst they might not exhibit axiomatic rationality, instead show something much more valuable: ecological rationality. Heuristics are tailored to the environments in which humans make decisions, in order to achieve the goals that actually matter to those decisions. This goes beyond coherence and includes predictive accuracy, frugality and efficiency.
Consider another example, this time adapted from Thaler and Sunstein (2008: 36).
Suppose that you are suffering from serious heart disease and that your doctor proposes a gruelling operation.
The doctor can tell you about the outcomes of the operation in one of two ways:
[Gain Frame] Five years after surgery, 90% of patients are alive
[Loss Frame] Five years after surgery, 10% of patients are dead.
Thaler and Sunstein argue that, despite the content of the two ways of framing the outcomes of the operation being exactly the same, people are more likely to recommend or accept the operation when it is framed as a gain than when it is framed as a loss. This, they argue, shows that people are attending to arbitrary features of the way the information is presented, rendering them vulnerable to framing effects and leading them astray in their decision-making.
Gigerenzer’s theory of ecological rationality, however, challenges whether this claim is justified. From the perspective that people adopt ecologically rational decision-making heuristics, we might assume that the use of a gain or loss frame is not arbitrary, but itself carries relevant information. Consider the situations where people are asked to make these kinds of decisions. In such situations, the relevant goal of the patient is something like maximising their chance of survival. To pursue this, Gigerenzer argues, they need to know the answer to the question “Is survival higher with or without surgery?” As the example is set up, we don’t know the risks of not having surgery. However, by framing the option as a loss or a gain – by highlighting the chance surgery results in survival or death – the doctor conveys information that the decision-maker implicitly understands. That is, ecologically rational decision-makers might infer from the doctor’s decision to present the information with a gain (/loss) frame that the doctor recommends surgery (/recommends against surgery) as being in the best interests of the patient. Far from being arbitrary, in the real world it may well be highly relevant whether the doctor chooses to present the information with a gain or a loss frame.
There are other situations where, in the real world, ecologically rational heuristics are as good or better than those that obey axiomatic rationality. This is partly because we often have to make decisions and enact behaviours under conditions of intractability and uncertainty – that is, where the different options and outcomes are simply too numerous to compute the optimum decision, or where information about the consequences of different options cannot be known. In such cases, humans have evolved handy rules of thumb in the form of heuristics which might result in better decisions than can be achieved via much more long-winded and energy intensive computations.
Philosophers tend to be fairly wedded to the idea that rationality, including coherence and consistency, is always desirable. But we need to think carefully about whether or not this is always the case. There may well be practical examples where loosening the strictures of axiomatic rationality results in much better outcomes. If this is so, it is not clear that we should demand decision procedures conform to abstract axioms rather than pragmatic, frugal, and efficient heuristics.
References
Gigerenzer 2019 ‘Axiomatic Rationality and Ecological Rationality’ Synthese 3(2): 1-18
Gigerenzer 2015 ‘On the Supposed Evidence for Libertarian Paternalism’ Review of Philosophy and Psychology 6(3): 361-383
Kahneman 2011 Thinking, Fast and Slow London: Penguin
Thaler and Sunstein 2008 Nudge: Improving Decisions About Health, Wealth and Happiness London: Penguin
Tversky and Kahneman 1983 ‘Extensional Versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment’ Psychological Review, 90: 293-315