This is the third in a series of blogposts by the members of the Expanding Autonomy project, funded by the Arts and Humanities Research Council.
Written By: Oscar A. Piedrahita & Matthew Vermaire, COGITO, University of Glasgow.
Don’t you find that other people’s beliefs are always getting in the way of progress? They seem to be full of bad views about everything from geopolitics to zoning laws to the most bizarre conspiracy theories; and what’s worse, they often seem perversely immune to rational methods of persuasion, bristling with a panoply of biases. It’s a free country, and everyone’s entitled to their opinions. Wouldn’t it be nice, though, if—without having to resort to positively illiberal measures of censorship and forced re-education—we could get those opinions to be a little more tolerable? What if the secret is all in the way evidence and potential beliefs are presented to people, so that with more carefully calibrated interventions we could exert a noncoercive but significant influence toward the truth?
Behavioral economics may already have provided just the tool for our times: the nudge. In Richard Thaler and Cass Sunstein’s original treatment of nudges, first published in 2008, they’re presented as just such noncoercive influences: a nudge is “any aspect of the choice architecture that alters people’s behavior in a predictable way without forbidding any options or significantly changing their economic incentives” (2021, introduction). Examples include methods for making options more salient to decision-makers—placing healthy foods at eye level in grocery stores and cafeterias, say, to get people to buy more of them—or using interactivity and gamification to make mundane tasks more engaging. (Perhaps you could reduce litter by making waste bins feel more like basketball hoops.) It’s a subtle approach that aims to guide behavior without taking away an individual’s freedom to make their own decisions.
As those examples suggest, nudges have usually been conceived of as ways of influencing people’s behavior, rather than their beliefs. Once the notion is on the table, though, it’s natural enough to ask about the possibility of doxastic nudges, nudges for influencing belief. Indeed, since changing people’s beliefs is normally a good way to change their behavior, it wouldn’t be surprising to find that many nudges for behavior work by nudging belief, and that, accordingly, doxastic nudges are being administered to us all the time by advertisers, policymakers, politicians, and influencers of all sorts.
Here’s an example of that kind of intervention, based on framing effects—differences in how people respond to the very same options based on how those options are presented. Suppose you’re considering having a serious operation, and you ask about the risks involved. You get one of the following responses:
- Response A: Of a hundred patients who have this operation, ninety are still alive after five years.
- Response B: Of a hundred patients who have this operation, ten are dead within five years.
In a sense, the information these responses give you is exactly the same; only the way of phrasing it differs. According to Thaler and Sunstein, though (2021, p. 39), which response you get could make a big difference to whether you end up going through with the procedure; and it seems plausible that this is because they have different effects on what you believe about it. Response A sounds rather cheery: you might go away from it believing that the operation is safe. Response B, on the other hand, feels alarming; the thought of those fatalities may make you more likely to find the operation dangerous. If your doctor knows this, and if she has an interest in your choice—maybe she’ll be paid more if you opt for it; or maybe she privately thinks it would be better for you to avoid it, and wants the best for you—she might even intentionally choose one way of framing the risks rather than the other. Then she’d be nudging you to believe that the operation is safe or that it’s dangerous.
If that seems inappropriate to you, then you’re in good company. Many writers have worried that behavioral nudges are morally problematic, even when they aim to promote the welfare of the people they influence, because they fail to fully respect them as autonomous agents. We see here that this worry carries over to doxastic nudges, too: it might seem that exploiting framing effects and similar psychological mechanisms to foster beliefs in people is a violation of their epistemic autonomy, their right to make up their own minds. At the same time, though, we see why nudges could be attractive tools for changing people’s minds. Suppose we want fewer people to get the operation, to relieve stress on the medical system, and so we train doctors to offer Response B. If a patient gets that response, and as a result comes to believe that the operation is dangerous, what does he have to complain about? He certainly wasn’t forced to believe that the operation was dangerous, nor was he misled: the information he got was just as true as what he would have had from Response A. If it was silly of him to be influenced by the framing of the response, isn’t that his own fault?
Other examples of doxastic nudges might include the use of flattering or unflattering images of public figures or criminal defendants in newspapers and other media, to suggest interpretations of their character; changing the order in which people encounter evidence in hopes that they’ll focus on certain parts of it; and many other tools of PR campaigns, advertising, and everyday speech. (Our topic has a good deal of overlap with the study of rhetoric generally.) In a given case, though, it can be hard to say whether a persuasive strategy functions as a nudge or simply as a way to give evidence or make an argument. Such calls are especially hard to make because some basic conceptual questions about nudges have yet to be clarified: the category has been given precise definitions in different ways (e.g. Saghai 2013; Zorzetto & Ferraro 2019; Parmer 2023), with no account winning wide acceptance.
Other important questions about nudges are empirical. Just how effective are they, for instance, and in which contexts? Could doxastic nudges, in particular, really be an effective countermeasure against misinformation, biases, and other epistemic ills, if deftly employed? And how do they work, anyway? Not everyone thinks the influence of nudges (behavioral or doxastic) is irrational, and to some extent this is a matter for empirical psychology to investigate. Perhaps they constitute a form of testimony, signaling what the people around us believe or prefer. It’s common, though, to view them as ways of hijacking less reputable mechanisms of thought, short-circuiting our processes of rational reflection. If that’s the better picture, at least, then the ethical questions raised above will remain significant, alongside the empirical ones: is it cynically manipulative to persuade people by triggering their irrationality, even if it’s for their own good? When it comes to doxastic nudging, distinctively epistemic questions also arise. Could a belief you give someone by nudging them be epistemically justified? Could it count as knowledge? And if we really are the sorts of creatures that can be predictably affected by irrational influences, what exactly are the rules of engagement for permissible persuasion?