You are a public health official responsible for purchasing medications for the hospitals within your catchment area in the NHS. Your policies significantly affect which, out of the serpentine lists of heart disease medications, for example, are available to your patients. Today, you must choose between purchasing one of three heart disease medications: Drug A, Drug B, and Drug C. They are pretty similar in efficacy, and all three have been in use for many years. Drug B is slightly less expensive than Drug A and Drug C, but there is emerging evidence that it increases the likelihood that patients will take “bad bets,” i.e. make large gambles when the chance of winning is low (and thus might contribute to large social costs). Drug C costs a tiny bit more than Drug A, but there is some evidence that Drug C may help decrease implicit racial bias. You have been briefed on the research suggesting that implicit racial bias can lead people to make choices that consistently and unintentionally limit the opportunities of certain groups, even when all the involved parties show explicit commitments to social equality. Finally, there is emerging evidence that Drug A both helps people abstain from alcohol and dissociates negative emotional content from memories. Which drug should you purchase?
Let us begin to think about this question through the lens of the idea of the “Nudge,” which has exploded onto the public sphere (and blogosphere) since Thaler and Sunstein published their book, “Nudge: Improving Decisions about Health, Wealth, and Happiness” (see the blog here). I briefly and incompletely introduce nudges here, in the hope that we may soon move on to discuss the kind of “nudge drugs” our thought experiment considers.
In their interview with Amazon.com (yes, Amazon apparently conducts interviews), Thaler and Sunstein describe a nudge as “anything that influences our choices. A school cafeteria might try to nudge kids toward good diets by putting the healthiest foods at front.”
Much of the philosophy of the Nudge rests on the combination of three ideas:
1) We all have unconscious biases – towards choosing items at eye level, preferring the “default option”, or preferring short-term gains, etc. – that impact the decisions we make; these biases are being revealed by scientific research.
2) Institutions, like government, structure the environment in which you make choices (i.e. the cafeteria chooses what you see first, the government chooses opt-out or opt-in policies), and this environment determines the effect of these biases.
3) Since the environment must be structured in SOME way, institutions should structure it so that the cognitive biases work for, rather than against, people; these nudges would “make dramatic improvements in the decisions people make, without forcing anyone to do anything,” say Thaler and Sunstein on Amazon, and they add: “We think that it’s time for institutions, including government, to become much more user-friendly by enlisting the science of choice [i.e. psychology and behavioral economics] to make life easier for people and by gently nudging them in directions that will make their lives better […] better investments for everyone, more savings for retirement, less obesity, more charitable giving, a cleaner planet, and an improved educational system.”
The normative force of Nudge rests on society’s ability to change the “choice architecture,” the environment in which the biases operate. (For extended discussion of Nudge ethics, see the American Journal of Bioethics.)
But what happens if we can change the biases themselves?
Recent research suggests that we can and might already be changing these biases with common medications such as those for anxiety and high blood pressure – and on a grand scale. Preliminary research raises the question of whether statins, the most widely prescribed cholesterol-lowering drugs, are linked to cognitive impairment. Oxytocin, sometimes used to increase the let-down of milk during breast-feeding, may promote ethnocentrism. Preliminary research attributes the qualities of Drugs A, B, and C in our thought experiment to existing medications (see pt. 4 below). Whether we choose to take one medication or another might already be affecting the way we evaluate risks and what sort of social biases we have.
Should the NHS be concerned about nudge drugs? Without knowledge of the “nudge” function of the drugs, the public health official’s decision is easy: three drugs of similar efficacy and different cost = choose the cheapest.
Once we start to reveal these not-altogether “medical” side effects, we are faced with a series of difficult questions. A few of these are listed below, and I hope they can help us begin a fruitful discussion on the ethics of Nudge Drugs:
1) Freedom: Thaler and Sunstein argue that nudges help people make good decisions without restricting their choices (though this has been debated). Does shifting the biases and capabilities of individuals directly (rather than shifting the environment) preserve this freedom of choice? Thaler and Sunstein argue that, to count as a nudge, an intervention should not be so strong as to manifestly thwart choice; similarly, drugs might differ in the potency of their nudge effects. But is the worry only that drugs might have, on average, more potent effects?
2) Limits of health care: while a liberal society might strive to be free of racial biases (explicit or implicit) that limit opportunities for specific groups, or strive to minimize the bad economic and social risks that its citizens take, should the NHS spend health funds to promote social equality (or be able to make a choice that saves the health budget money but costs other institutions and individuals)?
3) Requirements to research: if we think that these “nudge effects” of medication are morally relevant, does this create an obligation to rigorously test all drugs for these morally relevant nudge effects? How strong is the obligation and what are the morally relevant nudge effects? Thaler and Sunstein’s definition is massively inclusive…
4) Pleiotropy: what if drugs have multiple and complex nudge effects? Propranolol, for example, actually inspired all three drugs in the thought experiment: one of its clinical applications is heart disease, and it may bias people towards bad gambles and emotional dissociation (the latter is being investigated as a treatment for PTSD), as well as decrease alcohol cravings and decrease implicit racial bias (this last study was conducted at Oxford by Sylvia Terbeck, Guy Kahane, Sarah McTavish, Julian Savulescu, Philip J. Cowen, and Miles Hewstone; see the Telegraph, Medical News, and Oxford Student coverage, and the full paper). An important caveat, however, is that much of this research used acute doses of propranolol, and an effect with chronic propranolol would need to be shown for it to be relevant to the clinical population.
What are your thoughts on these complications? Are there other examples of “Nudge Drugs,” and what ethical issues do they raise?
Which drug do you think the Public Health Official should choose?
I thank you in advance for your thoughtful comments and discussion.
It seems that A and C dominate B, in the sense that B might cause a mildly harmful side effect while the “side effects” of A and C are less likely to hurt patients. So if the cost difference is small, it would make sense to select one of them. One might argue that the anti-alcohol effect of A benefits the patient, so the typical medical aim of helping the patient rather than non-patients would favour A. But if the benefits seem incommensurable (say that A makes patients less sexist instead), then random assignment or random approval might be the fairest thing: flip a coin.
The limits-of-health-care point is relevant: if social policy is supposed to be implemented by the NHS, then it had better get part of that budget too. But conversely, if a visit to the doctor might imply nudges towards social conformity, it might discourage some people: such concerns have been raised in regard to a variety of other functions. Being clear about who your doctor is working for is important for establishing trust.
Thank you, Anders, for your thoughtful comment. Applying a patient-centered strategy – one that looks at the net benefits for the individual, not society – is very thought-provoking. This might be a good rule of thumb that keeps the limits of health care in mind. But it is not clear to me how we can make judgments about whether this kind of social side effect harms or benefits the person without basing that judgment on a smuggled-in idea about what kind of society we should have.
Let’s look more closely at the “bad bets” side effect. Risky bets seem, to many of us, like they would harm the individual (this is why we call them “bad”), but is this only because we hold the dominant value of a circumspect society (a society dominated by the turtle’s safe, slow-and-steady strategy)? On the other side, a few lament that people have become “too risk averse,” lack spontaneity, and because of this inappropriately prefer “safe bets” of all sorts; when applied to medical research, they argue, this leads to slow-and-steady, incremental progress but never breakthroughs, which is what actually matters. To get breakthroughs, you have to invest in ideas that seem to have very little chance of succeeding; and yes, many attempts will fail, but we should not be afraid of failure. Some cultures have even valorized these “bad bets”: whole generations of heroes in Irish myth gain glory through knowingly taking bad bets, and dying bloodily (and, to some, vainly). From this alternate perspective, we might change our minds and say that these bad bets are actually benefits both to society and to the individual.
And even if the benefits/harms to the individual were not especially difficult to extricate from our social judgments, it seems like those making decisions in the health care system *should* care about the broader social impact of that health care. We might for that reason look down on the NHS official who flips a coin to choose between two drugs that have the same cost/benefit for the individual but oppositely valenced effects for society.
Of course, giving the patient a choice between A, B, or C might be the most moral thing. Then again, the drug effect would no doubt be amplified by the placebo effect. People might also want to choose the drug that fits best with their existing life – the anti-racist would take the anti-racism drug, the teetotaller would go for the anti-alcohol drug, and the risk-taker would go for the bad-bets drug. This might actually be an argument against giving them the choice.
The problem is likely not valence but the complexity of weighing outcomes: the official who flips a coin between two drugs, one of which reduces racism and the other sexism, doesn’t seem to be doing anything seriously wrong. It is just that it is unclear which problem is worse, and even by what metrics to measure them. So maybe a simpler case would be drug A, which works normally, and drug B, which has the helpful side effect of reducing racism. Is there any reason to say B should be preferred? Here the fundamental ethics of nudging come to the forefront.
There is an important difference between drugs and environmental nudging: one can ignore the environment but rarely the drug. I can walk over to the doughnuts in the store, but if a drug makes me mildly allergic to doughnuts, I have less choice. It might simply be that the biases introduced by drugs are more pervasive (especially by being invisible and unavoidable), and this reduces my freedom. Something influencing the value of outcomes is different from something influencing the bias towards outcomes, and biases that can be consciously removed are ethically different from biases that cannot.