During my master’s research on human enhancement I gave many talks on cognitive enhancement to the general public. Back then I compiled a list of recurring biases I noticed in the discussions that followed, along with some tentative techniques for countering them. The paper “Cognitive biases can affect moral intuitions about cognitive enhancement” already explores the possible effects of some of the biases on my list: status quo bias, loss aversion, risk aversion and omission bias. Besides those four, the ones I most often came across were:
Zero risk bias
This was by far the most frequently recurring one. It might be a mixture of status quo bias and risk aversion, but I don’t know of any bias in the cognitive-bias literature that specifically matches it, so it may be one that is particularly likely to be overlooked.
People would compare a cognitive enhancer’s risks with the absence of risk: if it carried any risk greater than zero, they would mentally classify it as risky. This overlooks two things. Firstly, not taking a cognitive enhancer also carries several risks. Sandberg and Savulescu (2011) note how many deaths, accidents and injuries are caused by the decisions of cognitively deprived individuals. Secondly, most people committing this bias were already on a cognitive enhancer which is known to be fairly risky, namely caffeine.
My audience would often be alarmed that one study on modafinil reported a slight rise in blood pressure, while forgetting they were already taking caffeine, a drug for which several studies have found a bigger effect on blood pressure. One should compare modafinil’s rise in blood pressure with caffeine’s (the drug modafinil would be likely to replace), not with no effect on blood pressure whatsoever.
Another example: people would argue against the use of a new drug by saying it could have an as-yet-unproven long-term side effect, while forgetting that they were already taking a drug with proven long-term side effects of its own. The right analysis should compare “the probability that the new drug has unknown side effects” with “the incidence of the old drug’s known side effects plus the probability that the old drug has unknown side effects”.
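As a rough illustration of that framing, here is a minimal sketch with entirely made-up probabilities (none of the figures below come from any study; they only show the shape of the two comparisons):

```python
# Minimal sketch of the two framings, using purely hypothetical probabilities.

p_new_unknown = 0.05   # assumed probability the new drug has significant unknown side effects
p_old_known = 0.08     # assumed incidence of the old drug's known side effects (e.g. caffeine)
p_old_unknown = 0.01   # assumed probability the old drug still has unknown side effects

# Zero-risk framing: compare the new drug against no risk at all.
risky_under_zero_risk_framing = p_new_unknown > 0.0   # always True for any real drug

# Proper framing: compare it against what the old drug already costs us.
riskier_than_status_quo = p_new_unknown > (p_old_known + p_old_unknown)

print(f"'risky' under the zero-risk framing: {risky_under_zero_risk_framing}")
print(f"riskier than the status quo:         {riskier_than_status_quo}")
```

With these particular invented numbers the new drug comes out ahead of the status quo, but the point is the shape of the comparison, not the figures.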
Anchoring bias
Subjects who are first asked whether the number of African countries in the UN is more or less than 15 will later give much smaller absolute estimates than subjects who were first asked whether the number is more or less than 65.
In the same way, the first question we ask when pondering the use of a drug can have an anchoring effect on the later analysis. If we first ask what the risks of the new drug are, we anchor later judgements about the desirability of using the drug in this initial risk assessment, biasing them towards a conservative response. The same can happen when we first ask about the benefits of the new drug, biasing towards a non-conservative response, or about the benefits or risks of the status quo, biasing towards a conservative or non-conservative response respectively.
One possible solution is to frame the initial inquiry more generally, perhaps as “Should we use the new drug or not?”, and only enter into a specific risk-versus-benefit analysis later on. Priming our minds to think first about either the risks or the benefits of a drug could irreversibly bias our final conclusion.
Statistical format
The way we update our beliefs is partly an adaptation to the selective pressures of an ancestral environment in which we could only learn about the risks or benefits of something by directly witnessing, or hearing about, people who were benefited or harmed by it. Updating our beliefs by reading that a drug increases the likelihood of developing high blood pressure by 1% was never a recurring evolutionary challenge. This means that presenting a lot of scientific data will do little to convince people to change their habits. Presenting statistics in terms of occurrences – 1 in 10, instead of 10% – can help. However, the probability someone will use a new drug depends more on the success or failure of a user they know than on the overall effects of the drug on a large number of unknown users. One case, full of confounders, has more weight than hundreds of double-blind controlled cases.
Base rate neglect
When taking into account a new study showing the risks and benefits of a new drug, one can easily forget about the initial probability estimate one had for the risks and benefits of that drug. Normally, informed initial probabilities should have a bigger say in our final estimate than the new data. This bias will make one ignore the results of past studies, or even ignore the basic question “What’s the probability that any given new drug will have big risks or big benefits?” Although the past failure to success ratio of pharmacological research has yet to be assessed, there have been some big failures (thalidomide) and some big successes (penicillin); I would believe we have more reasons to be optimistic than not.
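To make the base-rate point concrete, here is a minimal Bayesian sketch with invented numbers (the base rate, sensitivity and false-alarm rate below are assumptions for illustration only, not figures from any real study):

```python
# Minimal Bayesian sketch: how much should one alarming study move us,
# given an informed prior?  All numbers are invented for illustration.

prior_harmful = 0.05            # assumed base rate: 5% of such drugs have a serious side effect
p_alarm_if_harmful = 0.80       # assumed chance a harmful drug produces an alarming finding
p_alarm_if_harmless = 0.20      # assumed chance a harmless drug produces a false alarm

p_alarm = (p_alarm_if_harmful * prior_harmful
           + p_alarm_if_harmless * (1 - prior_harmful))

posterior_harmful = p_alarm_if_harmful * prior_harmful / p_alarm
print(f"P(harmful | alarming study) = {posterior_harmful:.2f}")  # roughly 0.17
```

Even with a fairly alarming study, the posterior stays well below one half because the prior was low; ignoring that prior is precisely the base rate neglect described above.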
Disjunction bias
One might think that a drug with a long list of severe low-probability side effects is safe because they fail to realise that those probabilities add up: the probability of at least one of them occurring could be undesirably high for a severe side effect. This leads to a generalized underestimation of risks. Moreover, a drug with many low-probability side effects will be seen as safer than a drug with a single severe side effect of medium probability, even when the combined risk is comparable.
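As a quick illustration of how those small probabilities add up (assuming, purely for the sake of the example, ten independent side effects at 1% each):

```python
# Minimal sketch: ten independent severe side effects at 1% each.
# The numbers are hypothetical; the point is that the disjunction is
# much larger than any individual item on the list.

p_each = 0.01
n_side_effects = 10

p_at_least_one = 1 - (1 - p_each) ** n_side_effects
print(f"P(at least one severe side effect) = {p_at_least_one:.3f}")  # ~0.096
```

That is roughly a 1-in-10 chance of at least one severe side effect, even though every single item on the list reads as “only 1%”.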
Because of the statistical format bias, an improbable description of a particularly unlucky individual suffering all the side effects mentioned in a long list would instil more caution in the reader’s mind than the same long list of severe low-probability side effects reported for the general population.
Comments

As a title, ‘Cognitive biases can affect moral intuitions about cognitive enhancement’ states the obvious. But I am sorry to say I struggled to find anything of substance in it and found some of the reasoning quite bizarre. The example of the ‘gamble’ is very strange, given that the $800 certain gain is not a gamble. A slight understanding of gambling, probability and rational decision-making would help when discussing the topic. As usual, too many of the cited studies use university students (mostly American), which means these studies are far too limited to be used to show and explain the decision-making of the ‘public’ or ‘lay-people’ (whoever or whatever they might be).
In your post you say, ‘Although the past failure to success ratio of pharmacological research has yet to be assessed…I would believe we have more reasons to be optimistic than not.’ When it comes to psychopharmacological and neuropsychopharmacological R&D, we can assess it as being pretty much a failure. For sure there are exceptions, and some psychiatric conditions would be difficult, although not impossible, to manage without the use of drugs. What so-called enhancement drugs share with therapeutic psychotropic drugs is the seemingly insuperable problem of side effects. A lot of people have experience of these therapeutic drugs, or have knowledge of them from their family, immediate social circle and the media, and are rightly suspicious of them; given the history and present practices of Big Pharma, they have no reason to be optimistic. Little wonder then, as you put it, that ‘the probability someone will use a new drug depends more on the success or failure of a user they know than on the overall effects of the drug on a large number of unknown users. One case, full of confounders, has more weight than hundreds of double-blind controlled cases.’ Why should they trust the results of double-blind controlled cases that have been conducted by drug companies or indeed universities? The former have a habit of distorting their research and the latter publish an unending stream of conflicting results.
Obviously most people are going to trust caffeine, which comes in the form of delicious beverages that have been extensively enjoyed for centuries, have few associated side effects and probably have no unknown side effects, over modafinil, which comes in the form of a pill, has had just a few years of very limited use and has already racked up a long list of side effects which is still growing. I am sure Big Pharma would call this preference “irrational”, but they would say that, wouldn’t they?