As illustrated by several recent events, Mexico suffers from a lack of security. The country holds the world record in kidnappings, with an estimated 123,470 people kidnapped in 2013 alone. In August 2014, the official number of missing people was 22,320. Citizens are fed up and are demanding security, perhaps the most basic good a government should provide. Here I’ll discuss what appears to me to be one philosophical mistake about the value of security for people. It’s useful to observe and avoid this mistake, since it pertains to a wide range of practically important choices (which I’ll mention at the end).
Mexico’s lack of security is unquestionably a bad thing. First, and most directly, it involves risk of serious harm, such as being robbed, kidnapped, raped, or killed. Second, it causes people to experience fear and anxiety, which are themselves bad, and which in turn result in losses of good things that otherwise would have been enjoyed (according to one report, “[as] a result of the wave of violence in [Mexico], 44% of citizens have stopped going out at night, 25% have stopped taking cabs, and 21% have avoided going out for lunch or dinner”).
I’ll focus on the first, more direct way in which lacking security is bad (set aside the issues about fear and anxiety). Suppose a roaming gang in some area exposes each member of the local population of 10,000,000 people to a 1-in-1,000,000 chance of serious harm (death). When confronted with the choice between funding security to stop the gang and instead funding a rescue crew to save little Bart Simpson who has fallen into a well, people tend to favor the identified child in the well over the gang’s “statistical victims.” This tendency has been studied empirically, and many regard it as yet another bias that human psychology succumbs to. But as Norman Daniels argues, there is room for reasonable disagreement over whether it is justifiable to give priority to identified victims over statistical ones. I won’t attempt to resolve this philosophical disagreement here, but I will outline one natural line of support for the view that we should prioritize identified victims (that I’ve encountered a few times). Here’s how it goes:
- Stage One: While it is true that if you were in fact killed this would constitute a serious harm, it’s not true that if you were exposed to a 1-in-1,000,000 chance of being killed you’d thereby suffer a serious harm. It is plausible that exposure to this risk counts as a small, though perhaps not trivially small, harm.
- Stage Two: It is counterintuitive that many small harms could add up to be as morally significant as one very large harm. For example, it seems hard to believe that someone’s death could be outweighed by lots of headaches each had by a separate person. Perhaps no number of small harms could be worse than one very large harm (death).
- Stage Three: We should prioritize Bart the identified victim over the gang’s statistical victims. After all, if we helped Bart, we’d be sparing someone from one large harm, whereas if we increased security for the gang’s statistical victims, we’d only be preventing many small harms (bringing about many small benefits).
I do not believe that this line of reasoning offers plausible support for the general prioritization of identified victims over statistical victims. Stage One does not seem particularly problematic. Maybe it’s contestable whether exposure to risk counts as a harm, but even if so we could simply replace “small harm” with “small expected harm” and essentially run the line as before. There are good objections to Stage Two (in the form of powerful arguments for the conclusion that death can be outweighed by headaches), but I won’t pursue them here. The crucial mistake, I think, comes in at Stage Three. It’s simply not true that, as a general rule, favoring statistical victims means averting exclusively small harms or conferring exclusively small benefits.
To appreciate this point, consider two ways we could make the gang’s statistical victims more secure: (i) we could hire security forces to defensively patrol the streets, which would deter the gang’s activities to some extent, thereby reducing each person’s chance of being killed from 1-in-1,000,000 to 1-in-2,000,000, or (ii) we could hire security forces to aggressively capture the gang.
If we did (i), we’d be bringing about many small benefits, but the gang would likely nonetheless kill some of the 10,000,000 members of the population. On the other hand, if we did (ii), the gang would kill no one; we’d make it the case that some killings that would certainly take place otherwise will in fact not occur. But those are large benefits! Does it matter that the “recipients” of these large benefits are merely statistical, and have not been “selected” for killing at the time of our intervention? If the answer is “yes,” it can’t be supported by the above line of reasoning (following Stages One through Three), since the benefits we’re providing in (ii) are large. (It’s actually debatable whether we would be providing fairly large benefits in doing (i), but in assuming this isn’t the case I’m making an assumption favorable to my opponent, allowing that in some cases favoring statistical victims means conferring exclusively small benefits.)
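To make the contrast concrete, here is a rough back-of-the-envelope sketch using the hypothetical figures from the example above (the numbers are illustrative assumptions from the scenario, not real data):

```python
# Rough expected-value sketch using the hypothetical figures from the example above.
# These numbers are illustrative assumptions, not real statistics.

population = 10_000_000
baseline_risk = 1 / 1_000_000      # each person's chance of being killed by the gang

# Doing nothing: roughly 10 deaths expected across the population.
expected_deaths_baseline = population * baseline_risk

# Option (i): defensive patrols halve each person's risk; roughly 5 deaths still expected.
expected_deaths_patrols = population * (1 / 2_000_000)

# Option (ii): the gang is captured; no one is killed by the gang.
expected_deaths_capture = 0

print(expected_deaths_baseline, expected_deaths_patrols, expected_deaths_capture)
# -> 10.0 5.0 0
```

Under these assumptions, option (ii) averts roughly ten actual deaths among people who cannot be identified in advance, which is the sense in which the benefits at stake are large rather than small.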
So the above line of reasoning fails to support the general prioritization of identified victims over statistical victims. It’s a natural line to consider, and one I’ve heard offered on multiple occasions, so I thought I’d go ahead and explain why it doesn’t work. In a nutshell, my reply to this line is this: maybe it’s true that small benefits can’t outweigh large ones, but not all benefits to statistical persons are small.
It’s a further question whether there is any independent and plausible defense of the view that we should prioritize identified victims over statistical ones (a topic for a future post). In addition to things like public security and safety, this philosophical debate bears on a number of issues in practical ethics including whether to give priority to the treatment or prevention of AIDS, and whether to give to charities that “earmark” donations or are otherwise set up so that donations will directly benefit particular people, as opposed to those nonearmarking charities that are typically more cost-effective (note that Caspar Hare has recently written an excellent paper on statistical people and giving to charity).
Today is Thanksgiving, a lovely holiday observed by two relatively secure countries to the north of Mexico. Those of us who do enjoy the benefit of security can remember to be thankful for it.
(I’m grateful to Carissa Véliz for discussion, and please read her informative and thought-provoking post on Mexico and state crimes.)
Hi Theron, just a couple of questions to start with:
1. Why do you call statistical victims victims at all? A statistical victim is a possible victim, and my notion of a possible victim is to a victim what my notion of fake money is to money: “fake” and “possible” don’t function as predicates and thus do not denote a subset of the things denoted by the noun they accompany. Your point about running risks is, I think, completely consistent with having genuine victims on the one hand and, on the other hand, risks for a given population.
2. As you presented the debate, it is still unclear to me whether all or most participants in the debate believe that there must be some objective fact about genuine victims and statistical “victims” in every scenario representing a choice concerning them, a fact that a priori justifies giving priority to some over others in such scenarios.
3. If this is the case, then the debate seems to be premised on a very strong claim. A more pragmatic approach would be to see such choices as demanding us to adopt a good strategy (as in game theory) for minimizing risk on a case-by-case basis, abstracting away from any fact that does not constitute risk. (I notice that “minimizing risk” is a notion that applies both to little Bart and to statistical “victims”.) For instance, if risk is negative expected utility, the question would simply be: which choice minimizes negative expected utility? Then if we measure utility by degrees of well-being, and measure expected utility by a distribution of probabilities over utilities, we are bound to have some cases that favour statistical “victims” (e.g., a nuclear disaster, earthquake, or very dangerous pandemic means a higher chance of large decreases in the well-being of statistical “victims”), and some that favour genuine victims (e.g., a rape or murder means a higher chance of a large decrease in the well-being of genuine victims).
4. This approach would also explain why there might be genuine dilemmas (i.e. when no choice is the best one). I don’t see how dilemmas could be explained if we adopt the strong claim mentioned in (2).
So, Theron, I get the sense that you’re actually rejecting Stage One. You say a change from 1/1,000,000 risk to zero risk is a significant benefit to a few, because some will live who would otherwise die. But Stage One avoids treating this as a significant benefit (when it comes to making policy decisions, anyway), instead cashing it out in terms of a small reduction in everybody’s risk level and thus a small benefit. You might claim that there is a significant benefit to those whose lives are saved *on top of* the small risk reduction for everybody (as posited at Stage One) – but that would be double counting.
I figure Stage One involves a tacit commitment to ex ante harm analysis, whereas your sympathies are with an ex post approach.
Hi Andrews and Owen: thanks for your comments, and I apologize for the delay in responding.
Andrews: (1) Yes, possible victims need not be actual victims, if to be an actual victim one must incur some actual harm. (And *merely* possible victims are not actual victims at all.) But I do not see what’s problematic about the notion of a possible victim (or a probable one), if it just refers to a person who faces some chance of incurring actual harm. (2) I’m not sure how many participants in this debate believe that there’s always some reason to favor helping identified victims over statistical victims. (You could check out that Daniels paper I cited and chase down further citations to get to the bottom of this!) (3) I agree that choices about risk could be handled in this straightforward utilitarian way (maximize expected utility, and/or minimize expected disutility), and that this approach would sometimes imply we should help the statistical victims and it would sometimes imply we should help the identified victims. Of course, those who think we should give greater priority to identified victims over statistical victims (in virtue of the fact that they’re identified versus statistical) would reject this straightforward approach for failing to capture something they deem morally relevant. (4) I am confused. First, I am not entirely sure what you mean by “explain why there might be dilemmas”. Second, I don’t quite follow how the “minimize expected disutility” approach would ever imply a dilemma – it’d presumably always instruct you to pick the option with the lowest (or tied for lowest) expected disutility.
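To make (3) concrete, here is a minimal sketch of the kind of calculation Andrews describes, with purely hypothetical probabilities and harm magnitudes; it simply picks whichever option leaves the least expected disutility behind:

```python
# Minimal sketch of the "minimize expected disutility" approach from (3).
# All probabilities and harm magnitudes below are hypothetical placeholders.

HARM_OF_DEATH = 1.0                 # arbitrary unit of disutility for one death
POPULATION = 10_000_000
PER_PERSON_RISK = 1 / 1_000_000

# Expected disutility that remains after each choice.
remaining_disutility = {
    # Rescue the identified victim: the whole population keeps its small risk (~10 expected deaths).
    "help the identified victim": POPULATION * PER_PERSON_RISK * HARM_OF_DEATH,
    # Fund security and capture the gang: the identified victim is certainly lost (1 death).
    "help the statistical victims": 1.0 * HARM_OF_DEATH,
}

best_choice = min(remaining_disutility, key=remaining_disutility.get)
print(best_choice, remaining_disutility)
```

With these made-up numbers the calculation favors the statistical victims; with different numbers it would favor the identified victim, which is exactly the case-by-case flavor of the approach.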
Owen: For Stage One, the idea is that *a single individual’s* incurring a small risk of harm would constitute *at most* a small harm, whether we’re thinking of harm in expected or actual terms. (Stage One also doesn’t by itself say anything about the moral significance of these harms, or how they “count”; it’s only Stages Two and Three that discuss moral significance.) So I don’t deny Stage One. I do say that, *in some cases* (e.g., the one where the gang is captured), removing *many* small risks *for a large number of people* at once certainly achieves some large actual benefits for a few people whose identities are unknown ex ante. There are other cases where removing many small risks for a large number of people simultaneously *makes it likely* that some people whose identities are unknown ex ante will receive large actual benefits. My point is only that, in many such cases of favoring the statistical people, we’d achieve some large (actual or expected) benefits. Since Stage Three assumes this is false, I deny Stage Three. (And I can do all this while remaining neutral on whether to morally count benefits/harms ex ante, ex post, or both.)
Thanks for your reply. I think I see what you’re saying. Stage One is technically consistent with two positions: 1) the small risk in itself constitutes a small harm (the ex ante harm view) and 2) the small risk of harm does not constitute a harm at all – only actually being killed does (i.e., the ex post view) (and (3), that both risk and outcome constitute harms, but again I think this would be double-counting). The difference, after all, is trivial in the single-individual case – both ex post and ex ante, the individual harm is not large. But I do still think you end up implicitly taking a view on counting harms/benefits ex post. This is because you count the gains in large-population cases, when deciding on a policy ahead of time, as “a small number of large benefits” (which can outweigh other large benefits) rather than “a large number of small benefits” (which cannot outweigh large benefits, by Stage Two). That only makes sense from an ex post perspective, in the present context. Someone who treated harms ex ante would say that, at Stage Three, we should count the ex ante risks as the converse, a large number of small harms.
Thanks Owen. It’s right that I say there *are* a small number of large benefits (in that quote), but I’d insist that that’s technically neutral on whether/how those large benefits *morally count*. Of course – as you’ve sensed – I think they *do* count morally, but I don’t think I need that to rebut Stage Three. Stage Three implies there are no big benefits in play for the statistical people, and I deny this.
(Also, it’d be fun to discuss these issues in person sometime!)
1. Nothing problematic — it was just a terminological detail.
2. Ok, thanks for the references as always.
3. We agree on this.
4. Nevermind, I had something else in mind.