How people are wrong about cognitive enhancement and how to fix it
During my master’s research on human enhancement I gave many talks on cognitive enhancement to the general public. Back then I compiled a list of recurring biases I noticed in the subsequent discussions, along with some tentative techniques for countering them. The paper “Cognitive biases can affect moral intuitions about cognitive enhancement” already explores the possible effects of some of the biases on my list: status quo bias, loss aversion, risk aversion and omission bias. Besides those four, the ones I most often came across were:
Zero risk bias
This was by far the most glaring and recurrent one. It might be a mixture of status quo bias and risk aversion, but I don’t know of any bias in the cognitive bias literature that specifically matches it, so it may be one that is easily overlooked.
People would compare a cognitive enhancer’s risks with the absence of risk: if it carried a risk greater than zero, they would mentally classify it as risky. This overlooks two things. Firstly, not taking a cognitive enhancer also has several risks. Sandberg and Savulescu (2011) note how many deaths, accidents, injuries and so on are caused by the decisions of cognitively deprived individuals. Secondly, most people committing this bias were already on a cognitive enhancer known to be fairly risky, namely caffeine.
My audience would often be alarmed by the fact that one study on modafinil reported a slight rise in blood pressure, while forgetting that they were already taking caffeine, a drug for which several studies have found a larger effect on blood pressure. One should compare modafinil’s rise in blood pressure with caffeine’s (a drug modafinil would be likely to replace), not with no effect on blood pressure whatsoever.
As another example, people would argue against the use of a new drug on the grounds that it could have a yet-unproven long-term side effect, while forgetting that they were already taking a drug with proven long-term side effects of its own. The right analysis compares “the probability that the new drug has unknown side effects” with “the incidence of the old drug’s known side effects plus the probability that the old drug has unknown side effects”.
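To make that comparison concrete, here is a toy sketch. All the probabilities are invented placeholders for illustration, not real pharmacological data:

```python
# Toy comparison of switching to a new drug vs. staying with the old one.
# All numbers below are invented placeholders, not real data.

p_new_unknown = 0.02   # chance the new drug has a yet-unknown serious side effect

p_old_known = 0.05     # incidence of the old drug's known serious side effects
p_old_unknown = 0.01   # chance the old drug *also* has unknown side effects

risk_new = p_new_unknown
risk_old = p_old_known + p_old_unknown

print(f"new drug: {risk_new:.2f}")   # 0.02
print(f"old drug: {risk_old:.2f}")   # 0.06
# The fair comparison is risk_new vs. risk_old, not risk_new vs. zero.
```

With these made-up numbers, the new drug looks worse than "no risk" but better than the status quo, which is the comparison that actually matters.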
Anchoring
Subjects who are first asked whether the number of African countries in the UN is more or less than 15 will later give much smaller absolute estimates than subjects who are first asked whether that number is more or less than 65.
In the same way, the first question we ask when pondering the use of a drug can anchor a later analysis. If we first ask what the risks of the new drug are, we anchor later responses about the desirability of the drug to this initial risk assessment, biasing towards a conservative response. The same happens when we first ask about the benefits of the new drug, biasing towards a non-conservative response, or about the benefits/risks of the status quo, biasing towards a conservative/non-conservative response.
One possible solution is to frame the initial inquiry more generally, perhaps as “Should we use the new drug or not?”, and only enter into a specific risk-versus-benefit analysis later on. Priming our minds to think about either the risks or the benefits of a drug could irreversibly bias our final conclusion.
Statistical format bias
The way we update our beliefs is partially an adaptation to the selective pressures of an ancestral environment where we could only learn about the risks or benefits of something by directly witnessing, or hearing about, people who were benefited or harmed by it. Updating our beliefs by reading that a drug increases the likelihood of developing high blood pressure by 1% was not an evolutionarily recurrent challenge. This means that presenting a lot of scientific data will do little to convince people to change their habits. Presenting statistics in terms of occurrences – 1 in 10 instead of 10% – can help. Even so, the probability that someone will use a new drug depends more on the success or failure of a user they know than on the overall effects of the drug on a large number of unknown users. One case, full of confounders, carries more weight than hundreds of double-blind controlled cases.
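The translation from percentages to occurrences is mechanical; a small helper (my own illustration, not from the original text) makes the point:

```python
def as_frequency(p):
    """Render a probability as an approximate '1 in N' statement."""
    n = round(1 / p)  # nearest whole denominator
    return f"about 1 in {n}"

print(as_frequency(0.10))  # about 1 in 10
print(as_frequency(0.01))  # about 1 in 100
```

The content is identical either way; only the format changes, yet the occurrence format tends to be easier for readers to act on.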
Base rate neglect
When taking into account a new study showing the risks and benefits of a new drug, one can easily forget the initial probability estimate one had for those risks and benefits. Normally, informed initial probabilities should have a bigger say in our final probability estimate than the new data. This bias makes one ignore the results of past studies, or even the basic question “What is the probability that any given new drug will have big risks or big benefits?” Although the past failure-to-success ratio of pharmacological research has yet to be assessed, there have been some big failures (thalidomide) and some big successes as well (penicillin); I believe we have more reason to be optimistic than not.
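The interplay between an informed prior and one new study can be sketched with Bayes’ rule. The numbers below are invented for illustration only:

```python
# Bayes' rule: an informed prior should temper what one new study tells us.
# All numbers are invented placeholders.

prior = 0.05              # prior probability the drug carries a serious risk,
                          # informed by past studies of similar drugs
p_signal_if_risky = 0.8   # chance a study reports a risk signal when the risk is real
p_signal_if_safe = 0.3    # chance a study reports a spurious risk signal anyway

# Posterior probability of a real risk, given that the study reported a signal:
posterior = (p_signal_if_risky * prior) / (
    p_signal_if_risky * prior + p_signal_if_safe * (1 - prior)
)
print(f"posterior: {posterior:.3f}")  # 0.123
```

With these made-up numbers, one alarming study raises the estimate from 5% to about 12% – a real update, but far from the near-certainty that base rate neglect would suggest.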
Underestimating disjunctions
One might think that a drug with a long list of severe low-probability side effects is safe, failing to realise that those probabilities add up: the probability of any one of them happening could be undesirably high for a severe side effect. This leads to a generalized underestimation of risks. Moreover, a drug with only one severe side effect of medium probability will be seen as safer than a drug with many low-probability side effects.
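A short sketch shows how fast a long list adds up. The ten side effects at 1% each are invented numbers, and independence between them is assumed:

```python
# Probability of suffering at least one of several rare side effects.
# Ten invented side effects at 1% each, assumed independent.

side_effect_probs = [0.01] * 10

p_none = 1.0
for p in side_effect_probs:
    p_none *= 1 - p           # chance of dodging every side effect

p_at_least_one = 1 - p_none
print(f"{p_at_least_one:.3f}")  # 0.096 - almost 10%

# A single medium-probability side effect at 5% feels riskier,
# but the long list is actually worse:
p_single_medium = 0.05
print(p_at_least_one > p_single_medium)  # True
```

So a drug with ten “mere 1%” side effects is, under these assumptions, riskier than one with a single 5% side effect, which is exactly the comparison this bias gets backwards.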
Because of the statistical format bias, an improbable description of a particularly unlucky individual suffering from all the side effects on a long list would instil more caution in the reader’s mind than any long list of severe low-probability side effects occurring in the general population.