One of the great pleasures of studying human behaviour is seeing that what we find in our experiments, what we theorize in our papers and textbooks – as unlikely and counterintuitive as it may appear – actually predicts what happens in so-called real life. Take, for instance, the current build-up of a stock-market bubble in the UK, which is happening even more dramatically in the US. In the UK, the FTSE 100 is on its way to surpassing the record set during the high times of the dotcom bubble and has already surpassed the levels reached during the 2008 financial bubble; in the US, the Dow Jones has already reached new record highs. Despite having recently experienced the devastating consequences of a stock-market bubble bursting, banks and investors return a few years later to the same hyperbolic forecasts and predictions, and start to build up another bubble. It is as if the past did not exist. Compare this behaviour with the following anecdote, which most business school students probably know.
In 1976, a small team of experts in Israel was developing a new high-school curriculum for the Ministry of Education. After a year of work, they met to determine how much time they would need to finish the project. Each member wrote on a piece of paper the number of months they thought was needed. The predictions ranged from 18 to 30 months. One of the team members then asked a colleague, a distinguished expert in developing curricula, to recall other teams just like theirs at a similar stage. How long had it taken those groups to develop their curriculum? After a long pause, the expert told the group that 40% of similar teams had given up on the project altogether. As for the remaining 60%, they had completed the curriculum within seven years. The members wanted to know whether the expert believed their team was exceptionally skilled and thus likely to finish the task sooner. The answer was no – the expert judged the abilities of the members to be slightly below average. Despite this sober evaluation, the team remained highly optimistic that they would finish the project in less than three years. In the end, it took them eight.
Nobel Prize laureate Daniel Kahneman recounted this story when he and Amos Tversky introduced the phenomenon of the planning fallacy (Kahneman & Tversky, 1979). The anecdote contains all the elements that constitute the planning fallacy. First, a team makes overly optimistic predictions about how long it will take to complete a task. Second, they learn the history of comparable tasks, which is rather pessimistic. Third, and this is the most interesting, counterintuitive part: they ignore the past and hold on to their overly optimistic outlook. A vast literature documents projects that failed or never got started because of overly optimistic forecasts, ranging from minor software projects to major public developments such as airports and train stations. The “news” that the host country of the Olympics or the World Cup is way behind schedule is a testament to the power of the planning fallacy.
But why do people ignore the past when they try to forecast the future? The answer is that people convince themselves that the past is not relevant this time around. For instance, when software engineers were asked whether they use past experience to inform their plans, they responded “No… because it’s a unique working environment and I’ve never worked on anything like it” or “No, not relevant. It’s not the same kind of project at all” (Buehler et al., 2010). Meanwhile, 75% of software projects are completed after the predicted date. When looking into the future, people construct a best-case scenario in which one step inevitably leads to the next, so that even the improbable appears all but certain.
During the dotcom bubble, people in the financial industry were convinced that the “internet changes everything”, even generating predictions that the stock market would rise from then on forever. In the late 2000s, the development of new financial innovations (labelled with acronyms that make the acronym-crazy world of science blush) supposedly changed everything; the market from now on was always priced right and risk was balanced perfectly thanks to the labours of financial geniuses (find here an excellent summary). In both cases, investors argued that this time was different, that the past was not relevant.
While all this might not be a particularly new story, it has some interesting ethical implications. Of course, once we lose tremendous amounts of money (or the little we had to begin with), we want to blame someone. And professionals should be aware of all these problems, and deserve a fair amount of criticism (and legal action). But when we look at the experiments, one might start to wonder how likely it is that people in the financial industry, or in other bubble-prone parts of the economy (housing market, I am looking at you), can overcome their biases. Even when participants in the lab are confronted with the past directly before making predictions about the future, they fail to incorporate this information into their forecasts. Moreover, the power of the planning fallacy increases when people really want the future to be great. If participants in the lab become far more optimistic when there is a chance of winning, say, £10 more, imagine what happens when there is a chance of winning £100 million more. Such optimistic forecasts drive not only the behaviour of people betting with other people’s money, but also that of investors who bet with their own money. Since human forecasts drive the stock market, it seems almost inevitable that bubbles build. The moral accountability of the people in stock markets appears to be restricted by a hard-wired bias to see the future through rose-coloured glasses.
For most people, being human also means being almost incapable of foreseeing bad things. In our everyday lives, this is more often than not a blessing, sheltering us from stress and giving us the joy of anticipation. The ancient Greeks thought that Prometheus was to blame (or praise): he gave us not only fire (thanks for that), but also took away our ability to foresee our future doom. In their view, humans are more or less incapable of anticipating bad things happening. So, whenever behaviour is morally judged, we should keep in mind how much control the judged actually had over it, and whether or not they could have known better.
Thank you for this interesting post, Andreas.
Whilst I agree with much of what you write, there are two clear counter-examples to the view that humans are incapable of imagining bad things happening: one is nuclear power, the other is GMO. In both these areas there are plenty of intelligent people who believe in only the bad things…
Thanks, and I very much agree. But there is an important distinction to make. You and I are optimistic about our own futures (you about yours, I about mine), but rather pessimistic about the future of other people, of one’s country, of the human race. So we often display personal optimism – which drives the stock market – but prefer global pessimism. Mix this with a certain ideology and you find that either GMOs are the end of the world, or the European Union is, or… you name it.
Something I often like to point out is that every MBA must have encountered the story about the Dutch tulip bubble and all the other financial bubbles in the first few weeks of their economics education. Yet they all believe “this time it will be different”.
But being aware that you are biased seems to imply certain moral duties. It may be a bit like being an alcoholic: many alcoholics are aware that they are alcoholics, that they are unlikely to snap out of it when it would be rational to do so, and that some of the time they act with diminished capacity. That would mean that they ought to avoid placing themselves in situations where they will do harmful things. The alcoholic should (in a sober moment) realize that it is better not to have a car, since he is likely to end up in situations where he might drunkenly decide to use it, endangering everybody. In the same vein, knowing you have a hard-to-avoid bias, you might rationally try to set up a choice architecture that limits or counteracts your future biased actions (perhaps a Ulysses contract, or just organising a meeting schedule that includes obligatory debiasing methods).
So in the case of bubbly domains, if the bubbles are actually harmful (it is not a priori clear to me that this has to be true), we should want to introduce a negative feedback mechanism. We have put in place similar countermeasures for various biases and pathologies of governance (division of power, oversight systems, etc.). The fact that bubbly domains are known to be bubbly should actually be made to matter to participants – ideally, of course, by the participants realizing that they are irrational.
This is a very interesting comment. I would like to mention a few points, based on psychological research, that complicate your line of thought.
First, you mention that being aware that you are biased puts some responsibility on a person to address these biases. However, studies in which participants are educated about optimistic biases show that people thereafter believe it is very likely that other people have such a bias, but not themselves. I would be rather pessimistic about the chances of people becoming aware of their own biases, something that is extensively addressed in research on prejudice and racism. What further complicates the picture is that an optimistic bias is often advantageous for the individual, at least to a certain degree. An optimistic disposition furthers individual success in a company: it fosters hard work and ensures that your colleagues hold you in high regard. In contrast to an alcoholic, who is constantly confronted with the detrimental consequences of his or her addiction, optimists often experience the opposite. So, if we want to prevent bubbles and other disadvantageous consequences of optimistic biases, relying on the individual seems to be an overly optimistic assessment of people’s ability to gain insight into their own minds.
Second, you suggest the introduction of a negative feedback mechanism. This suggestion touches on one of the puzzling questions about optimism: how do people remain optimistic when constantly confronted with contradictory evidence? Research by Tali Sharot (UCL) suggests that people selectively incorporate good news – news that supports their optimistic view of the future – but disregard bad news. If, for instance, participants learn that it is LESS likely than they thought that they will get cancer (good news), they readily change their belief. Yet when they learn that it is MORE likely than they thought (bad news), they barely change their belief. This good/bad news effect on learning suggests that negative feedback has only modest power to conquer optimism. And the research suggests that people are not aware of this learning bias either, which makes it hard to counteract.
In conclusion, I think that people’s capacity to conquer their own biases is rather limited, and that preventing their effects on society should rest on rules and regulations rather than on “psychological education”. Not the most satisfying answer for a psychologist like myself, but similar suggestions have been made by psychologists for addressing racial biases.
Anders wrote: “Something I often like to point out is that every MBA must have encountered the story about the Dutch tulip bubble and all the other financial bubbles in the first few weeks of their economics education. Yet they all believe “this time it will be different”.”
But… there are also lots of times when rising asset prices imply something positive about social value. Stocks go up, in the main, because people think that the company in question is doing something worthwhile or profitable and they want a piece of the action. The real issue is not that investors think “this time it will be different” but that investors have imperfect information and can’t tell whether a firm’s stock really is a good investment. Investors know that there are over-valued assets as well as under-valued assets – surely the difficulty is in distinguishing between the two, rather than in acknowledging that over-valued stocks exist and that one should not (in general) buy them?
As for more general bubbly domains and the belief that “this time it will be different”… financial markets are far from the only place where people might have some of this going on. On campus the other day, a dude from Wellington’s own Socialist Workers’ collective (or party, or faction, or whatever) wanted to give me some printed material about Marxism. As far as socially damaging optimism bias goes, I’m having a hard time thinking of anything with a more dire record than actually existing socialist regimes. But I’m sure the Socialist Worker guy would have a whole bunch of reasons why “next time it will be different.”
Dave, thanks for the comment. I really like the point about more extreme political tendencies. There might be only one political movement that can keep up with the negative/horrific consequences of communism/socialism on society, and that is fascism/nationalism. Yet UKIP is on the rise, suggesting that “this time things will be different”. Past doom seems to have little bearing on elections either.
And I agree with you that investors have imperfect information, and that evaluating whether a stock is overvalued or undervalued is rather difficult, if not impossible. However, this observation still cannot explain why bubbles build – why do investors so consistently think that basically every stock will go up, that all the companies are undervalued? If investors were equally likely, given imperfect information, to think that a stock is overvalued or undervalued, bubbles would not build. Instead, they tend to believe that, for instance, all the companies selling pet food, gardening tools, groceries, or plastic replicas of the Queen waving her hand on the internet are undervalued. And that is what causes the damage. So buying and selling itself is not the problem (nor is imperfect information); constantly erring on one side is what is harmful.