
Smoking and yellow teeth

With at least one airline announcing this week that it was stepping up screening of Yemeni passengers, I want to return to a topic I’ve touched on before: profiling.

What interests me is ‘rational profiling’. If a security guard believes, for no good reason, that members of racial/ethnic/national group R are more likely than average to pose a danger to the public, and this assumption is false, then we have a straightforward example of prejudice. But suppose there is indeed a greater risk from members of R? It then seems rational to focus more attention on R-members.

But might increasing scrutiny on these people be unethical? Can the rational and the moral be teased apart? Can it be immoral to be rational? To resolve the issue, it would help if we could imagine two parallel cases in which the numbers were identical, but where we felt more uneasy about acting on one set of numbers than the other.

Here’s such a hypothetical example. Suppose there are 100 smokers and 100 non-smokers. Smoking causes cancer, but not everyone who smokes becomes ill. Suppose half the non-smokers (50) have yellow teeth, but none of the 100 non-smokers will develop cancer. Suppose, for whatever reason, that half of the smokers (50) have yellow teeth and that this same 50 will get cancer (the other 50 smokers will be unharmed by their habit). Cancer involves costly medical treatment, so a health insurance company wants to calculate the risk of prospective clients developing the disease.

Now imagine the government restricts the insurance company to asking just one question. What question should they ask? Well, the questions ‘Do you smoke?’ and ‘Do you have yellow teeth?’ will be of equal use to the insurance company. Fifty of the hundred smokers and fifty of the hundred yellow-toothed will develop cancer. So both questions are, as it were, equally rational.
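
For anyone who wants to check the arithmetic, here is a minimal sketch (the counts are exactly those stated above; the variable names are my own):

```python
# Minimal sketch of the example above: the counts come straight from the post,
# the variable names are illustrative.
smokers = 100
non_smokers = 100

yellow_smokers = 50        # half the smokers have yellow teeth
yellow_non_smokers = 50    # half the non-smokers have yellow teeth

cancer_cases = 50          # exactly the yellow-toothed smokers

# P(cancer | smoker): 50 of the 100 smokers develop cancer.
p_cancer_given_smoker = cancer_cases / smokers

# P(cancer | yellow teeth): 50 of the 100 yellow-toothed develop cancer.
p_cancer_given_yellow = cancer_cases / (yellow_smokers + yellow_non_smokers)

print(p_cancer_given_smoker, p_cancer_given_yellow)  # 0.5 0.5 -- equally informative
```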

But doesn’t the first seem less objectionable than the second?

11 Comments on this post

  1. Yes, it does, and I would say rationally so. It’s the smokers who, through their habit, are exposing themselves to the increased risk of cancer even if we can’t identify at the health insurance stage who’s going to be afflicted. Consequently, they should bear the cost of this choice.

    I think the analogy would work better if we imagine that people don’t have to own up to smoking—which presumably in reality they might not—and ask whether it would be acceptable to charge yellow-toothers higher premiums on the basis that half of them would be bearing a justified cost.

  2. If we assume that having yellow teeth is known to be causally connected with developing cancer in the same way that, and to the same extent as, smoking is then, no, the yellow teeth question is not more objectionable. Insurers routinely ask about risk factors that are not matters of personal choice – starting with age and sex.

  3. I agree with Andrew and disagree with Ultan. There’s probably a moral distinction between types of causal chains, as well as between causes and correlations. ‘Desert’ is likely to be relevant here. Of course actuaries/insurers will try and obtain whatever information they can. That was the entire point of the example – it may be equally rational to act/judge on either of two pieces of information, but that doesn’t make it equally moral.

  4. As you and the comments imply, the difference between rational decision-making and prejudice is embodied in our notion of rationality. In your example we see nothing but an arbitrary link between yellow teeth and cancer, and arbitrariness is at odds with rationality. Therefore the second question is objectionable.
    This may give us some guidance on the question of profiling, but I fear that it doesn’t take us very far.
    The following is very sketchy, but perhaps we could go a little further by requiring the prospective profiler to meet a burden of proof. That is, we would put in place a sort of precautionary principle against profiling unless there is a clearly demonstrable and proven reason for it, and the selection of those profiled would have to be based on scientific evidence.
    The argument for this would go along the lines that prejudice is a denial of autonomy (because its victims are not treated as autonomous individuals but as types), that autonomy is a human right, and that we should therefore only use profiling in extreme cases, and in addition should be able to demonstrate its necessity and the sound factual basis of selection.

  5. Anthony

    I don’t think all profiling can be so quickly dismissed as ‘prejudice’. As I wrote in my post, what intrigues me are cases where generalizing on the numbers is (statistically) rational.

    And I don’t know what you mean by ‘scientific evidence’. There are no doubt good stories to tell about why sex, race, smoking are statistically correlated with other things we might rationally care about.

    I’m not sure, either, what you mean by imposing a burden of proof on the profiler. It is impossible to imagine a society without profiling. A shop owner would be interested in the fact (assuming it is one) that people who come into a store between 8 and 9 am are more likely to want milk and newspapers than pasta, and might alter the layout of the store accordingly. This is a form of rational profiling. Of course there’ll be some individuals who go to the store between 8 and 9 in the hunt for pasta: they might be mildly inconvenienced by the 8-9am layout of the store.

    The paradigm case of profiling is acting/judging on the following information:

    Some, but not all, of the individuals in Group G have characteristic C, and some, but not all, of the individuals outside Group G also have characteristic C, where the percentage of those in Group G with characteristic C is higher than the percentage of those outside Group G with characteristic C.

    This description fits almost all the cases I’m interested in. Group G might be a race, or sex, or the class of all those who go shopping between 8 and 9am. Characteristic C might be the characteristic of wanting a newspaper, or of being a criminal, or of having a disease.
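
    In probability terms (the notation and the toy numbers below are my own, purely by way of illustration), the condition is simply that P(C | G) is greater than P(C | not-G):

    ```python
    # Sketch of the paradigm case: acting on group membership counts as
    # "statistically rational" when C is proportionally more common inside
    # Group G than outside it. The counts in the example are made up.
    def statistically_rational_to_profile(c_in_g, size_g, c_outside_g, size_outside_g):
        """True when P(C | G) exceeds P(C | not-G)."""
        return (c_in_g / size_g) > (c_outside_g / size_outside_g)

    # Example: the 8-9am shoppers. Say 60 of 100 early shoppers want milk and
    # newspapers, against 200 of 1000 other shoppers (invented numbers).
    print(statistically_rational_to_profile(60, 100, 200, 1000))  # True
    ```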

  6. Thanks for your reply, David
    Sorry, I was too far ahead in my argument, thinking of cases more controversial than your shopkeeper (which, it seems, meets my criteria – a clearly demonstrable and proven reason for profiling, and a selection of those profiled based on evidence).
    My step ahead was to start with the airline’s particular screening of Yemeni nationals, and to go on to think of other similar examples, such as systematic police searches of young blacks, in which cases profiling comes close to prejudice, often causes more than mild inconvenience, could exacerbate counter-prejudice …
    It is with these cases in mind that I suggested that profiling could be considered as an attack on autonomy.

  7. Hi Anthony

    Thanks for your clarification. I’m still not entirely sure what you mean by ‘close to prejudice’. Straightforward cases of prejudice are those where actions/judgements are based on perceived correlations, where no such correlation exists, and where there is no good reason to believe in a correlation. Where it exists, one way to try to reduce such prejudice is through education – pointing out that a belief is not actually true.

    But suppose it really is the case that a bomb is more likely to originate from Yemen than from Iceland. It then needs more work to claim that acting on this true belief (if it is true) is a case of ‘prejudice’. Whether or not we should label it ‘prejudice’ is precisely one of the interesting questions at stake.

    We’ve mentioned that ‘desert’ might be a consideration here: we may not object to asking people whether they smoke because we think they’re to blame for their habit: if smokers are required to pay higher insurance as a result of their freely chosen lifestyle, well tough.

    But there might be other important considerations. One might be the degree of inconvenience that you mention. Another might be the size of the statistical gap between Group G and non-Group G, with regard to the characteristic we’re interested in. A third might be what’s at stake when these correlations are acted upon – national security or a packet of pasta.

  8. Perhaps a fourth might be the societal implications of the discrimination in question. A fifth might be whether, and how easily, the profiler could obtain more precise information rendering the profiling unnecessary, unless we regard this as an example of the “what’s at stake” criterion. A sixth would presumably need to be who’s doing the profiling: a public official, a private professional, or a citizen.

  9. David,
    First, thanks for your courtesy in replying.
    Secondly, what does “close to prejudice” mean?
    Let us assume that we prove that young blacks are more likely to be imprisoned for criminal offences (i.e. disproportionately present in prisons compared to their presence in the total population). This could be taken by a profiler to be indicative that it is justifiable to target young blacks for searches: it seems, to use your words, to be a good correlation.
    Even leaving aside the possibility that the disproportionate prison population could be directly or indirectly due to discrimination, I can imagine unjust and unjustifiable consequences from such a seemingly rational profiling decision.
    I share with you the hope that education should help surmount such indirect (or even unconscious) prejudice.

  10. Peter

    My third point was intended to encompass your fourth. Yes to your sixth. Your fifth is interesting.

    More information might bring an individual into, then out of, then back into a targeted group. If you want to know whether a person is likely to develop a particular condition/illness, you could find out whether they smoke, whether they’re physically active, whether there’s a history of the condition in the family. Predicting what will happen when you have all this precise information is still profiling, just more sophisticated profiling. And the different pieces of information don’t necessarily all point in the same direction. A prediction of whether a person will develop a condition could in theory swing backwards and forwards with each piece of information until the information is complete (and only a God could have complete information).
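
    To make the swinging concrete (the prior and the likelihood ratios below are invented for illustration, not real medical statistics), think of each new fact as nudging the odds up or down:

    ```python
    # Sketch of how successive pieces of information can pull an estimate in
    # different directions. The prior odds and likelihood ratios are made up.
    def update(odds, likelihood_ratio):
        return odds * likelihood_ratio

    odds = 0.25  # prior odds of developing the condition
    for fact, lr in [("smoke", 3.0),
                     ("are physically active", 0.4),
                     ("have a family history of it", 2.5)]:
        odds = update(odds, lr)
        print(f"after learning they {fact}: p = {odds / (1 + odds):.2f}")
    # p moves 0.43 -> 0.23 -> 0.43: up, down, and back up again
    ```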

  11. I agree with you, David, but I guess there is a presumption that the profiling becomes more reliable as more information is acquired. More information can indeed have several effects: it can reinforce the existing statistical evidence, weaken it, or reverse it. In any case it seems to me that the profiler has an obligation to take such information into account if it’s relatively easy to obtain.

Comments are closed.