
Sam Harris, the Naturalistic Fallacy, and the Slipperiness of “Well-Being”

This post is about the main argument of Sam Harris’s new book The Moral Landscape. Harris argues that there are objective truths about what’s morally right and wrong, and that science can in principle determine what they are, all by itself. As I’ll try to demonstrate here, Harris’s argument cannot succeed. I call the argument “scientistic” because those who take (a variation of) its first two premises to be obvious are led to exaggerate the importance of scientific measurement for determining what’s morally right, and correspondingly to underestimate the importance of moral reasoning and moral philosophy.

Harris commits what philosophers call “the naturalistic fallacy”: that of attempting to draw conclusions about what we ought to do (normative conclusions) directly from premises that are purely factual, or scientific, and value-free (purely descriptive premises). Today I will show why such a move is fallacious, and draw attention to the way that Harris’s use of the ambiguous term “well-being” masks the fallacious move on which his argument relies.

Here, then, is a schematic version of Harris’s argument, which I’ll call The Scientistic Argument:

Premise 1.1) The right action is whichever action maximizes well-being.

Premise 1.2) Well-being is the balance of [conscious states C].

Premise 1.3) Scientists can measure the balance of [conscious states C].

Conclusion) Scientists can (indirectly) measure the rightness of actions.

Harris does not come down very clearly in favour of any one particular set of conscious states that he takes to constitute well-being, so I’ve left a placeholder in the argument in square brackets. You should imagine the placeholder as having been filled in determinately, in whichever way Harris thinks appropriate; it has to be filled in one way or another so that scientists will know which states to measure. One traditional account of well-being that Harris seems sympathetic to in places is the classic utilitarian definition in terms of pleasures and pains: the greater the balance of one’s pleasures over pains, the greater one’s well-being.

Here’s another argument; I’ll call it The Accountancy Argument:

Premise 2.1) The most economically successful business is whichever business makes most profit.

Premise 2.2) Profit is the balance of income over outgoings.

Premise 2.3) Accountants can measure income and outgoings.

Conclusion) Accountants can (indirectly) measure the economic success of businesses.

To be truly compelling, an argument needs to have both a set of undeniable premises and a conclusion that logically follows from them. The Accountancy Argument has these features. And The Scientistic Argument seems superficially analogous to The Accountancy Argument. If it is genuinely analogous, then it too must be a compelling argument, and we will have to accept its conclusion.

Roger Crisp’s post on this blog last week points toward one important disanalogy between The Scientistic Argument and The Accountancy Argument. Premise 2.1 of the Accountancy Argument is undeniably true because the most economically successful business is by definition just the same thing as whichever business makes most profit. In contrast, premise 1.1 of the Scientistic Argument does not seem to be true by definition (though the possibility that it is will be considered later).

This need not be a fatal objection: premise 1.1 could still be defended by showing that its two concepts “right action” and “action that maximizes well-being” refer to the very same property (just as the concepts “heat” and “total kinetic energy of the atoms in an object” refer, scientists have discovered, to the very same property, though the terms were not defined to mean the same). But Crisp raises the worry that the property of being the right action and the property of being whichever action maximizes well-being are not the very same thing: they might be, as Derek Parfit claims, “too different” to be the same thing. After all, we can’t possibly recognize that something is the right action without recognizing that there is some sense in which we ought to do it. Being the right action, as Parfit says, is a normative property. But we surely could know that some action would maximize well-being (in the sense defined by premise 1.2: maximizing the balance of [conscious states C]), and still legitimately ask whether there is any sense in which we ought to do it. Being the action that maximizes the balance of [conscious states C] seems, unlike the property of being the right action, to be a purely descriptive property: it describes how the world is or could be, without yet telling us what we ought to do about it. It seems unlikely, then, that the two concepts in premise 1.1 could refer to the same property.

As Crisp points out, there is still a third possible way to defend premise 1.1: Even if its concepts refer to two different properties, premise 1.1 might still be true if the rightness of an action “is anchored” in, or supervenes on, its maximizing well-being (i.e. maximizing the balance of [conscious states C]). This is to say that if two actions differ as to their rightness, then they must also differ as to whether they maximize well-being, although rightness and maximizing well-being are not the very same thing. On one view of the mind, this is similar to the relationship between mental states and brain states – you can’t have a change in whether you feel tired, for example, without a corresponding change in the state of your brain – even though feeling tired is not the very same thing as having a brain state of a certain kind.

The difficulty with this move for Harris, as Crisp correctly recognizes, is that in both these cases – mental and moral – scientists are unable to measure the supervenient property directly. So the question arises: How do we know about the supervenient property, or about the supervenience relation (and hence about the truth of premise 1.1)? We only find it appealing to think that feeling tired supervenes on brain states because each of us starts from our own individual experiences of feeling tired. Scientists cannot directly measure these conscious feelings in other people; they can only measure their behaviours and brain states (for example, by using surveys, or MRI scans). If our world were populated with zombie or unconscious robot scientists and a few unscientific conscious people, we might reasonably wonder whether the scientists would have any idea that feelings of tiredness even exist, let alone know anything about their supervenience relations to the other, measurable properties.

A magnified form of the same problem arises when it comes to moral rightness. Moral realists sympathetic to The Scientistic Argument might want to claim that we can have individual “experience” of the property of rightness, just as we have individual experience of the feeling of tiredness. If this claim could be defended, perhaps we could use it to argue for premise 1.1. However, there are a couple of serious difficulties for the claim that we “experience” the property of rightness. First, we often seem to disagree deeply about which are the right actions. This is so not just in real-world cases, but in hypothetical ones where the natural facts can be agreed on by stipulation. (E.g. suppose that over their whole lifetimes, Blue would have a well-being of 10 and Red a well-being of 5, all other things are equal, and you could either give an additional 6 units of well-being to Blue or 5 to Red. Which would be right? Those who care most about equality will answer one way; those who care most about the total will answer another.) This disagreement makes our “experience” of rightness look, at best, highly unreliable. Secondly, moral realists have provided no plausible explanatory account of how it is that human beings have the ability to experience the supervening property of rightness. This makes it difficult to see how human experience of such a property could even be possible. (My own view is that we should respond to these worries by abandoning the moral realist claim that rightness is discovered by human beings rather than constructed by them. Pace Harris, this need not mean that morals are relativistic, or just a matter of opinion, or that there are no moral truths.)
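
To make the arithmetic in the Blue/Red example explicit, here is a minimal sketch; it is my own illustration rather than anything drawn from Harris or the argument above, and the “total” and “gap” scores are assumed stand-ins for a total-maximizing and an egalitarian criterion. The point is simply that the same stipulated measurements support different verdicts depending on which moral weighting one adopts.

```python
# A minimal sketch of the Blue/Red example above. The "total" and "gap" scores are
# illustrative stand-ins for a total-maximizing and an egalitarian criterion; nothing
# here is Harris's own proposal. The point is only that the same measured numbers
# support different verdicts depending on the moral weighting chosen.

baseline = {"Blue": 10, "Red": 5}   # stipulated lifetime well-being

def distribute(bonus_to):
    """Return the resulting distribution if the extra units go to one person."""
    extra = {"Blue": 6, "Red": 5}[bonus_to]
    result = dict(baseline)
    result[bonus_to] += extra
    return result

for choice in ("Blue", "Red"):
    dist = distribute(choice)
    total = sum(dist.values())                     # what a total-maximizer tracks
    gap = max(dist.values()) - min(dist.values())  # what an egalitarian tracks
    print(f"Give to {choice}: {dist}, total={total}, gap={gap}")

# Output:
#   Give to Blue: {'Blue': 16, 'Red': 5}, total=21, gap=11
#   Give to Red: {'Blue': 10, 'Red': 10}, total=20, gap=0
# The measurements are fixed either way; which option is "right" turns on a moral
# judgement about how the total should be weighed against the gap.
```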

Unlike in the case of The Accountancy Argument, then, we have difficulty justifying the very first premise of The Scientistic Argument. But Harris might now reply that I am just being difficult: perhaps I should accept premise 1.1 as true because it is obviously true. How could anyone possibly doubt that the right action is the action that maximizes well-being? To say that some action would generate more well-being than another, Harris might say, is just to say, in different words, that it would be better. And how could the right action be any other than the best one?

There are some questionable steps in this reasoning to do with aggregation, which I’ll set aside. I admit that premise 1.1 does understandably invite the kind of reading on which it can easily be seen as tautologically or obviously true, just like the first premise of The Accountancy Argument, which is true by definition. This makes The Scientistic Argument seductive, but misleadingly so. The trouble lies in the slipperiness of the term “well-being”. For “well-being” is, in our ordinary language, a fundamentally normative term. Understanding “well-being” in the ordinary way, suppose there is a population of morally praiseworthy individuals in World1, in which each has a well-being of 1, and an otherwise equal World2, in which each member of the population has a well-being of 2. Then World2 is better, and (importantly) it is better by definition. If we use this ordinary, normative definition of “well-being” to understand premise 1.1 of The Scientistic Argument, then that premise may seem obviously true. But then it becomes tempting to jump to a different definition of “well-being” when we come to premise 1.2: for example, we may then be tempted to define “well-being” as “the balance of [conscious states C]”. Without defining “well-being” this way, premise 1.2 cannot be obviously true: we would, at the very least, need an argument for it. But if we do define “well-being” as “the balance of [conscious states C]” in premise 1.2, The Scientistic Argument then commits the fallacy of equivocation. Moreover, if Parfit is correct in his very plausible claim that normative properties and descriptive properties are just too different to be the same thing, then these two definitions of “well-being” are flat-out incompatible.

In The Accountancy Argument, the three premises could all be true by definition. But for moral realists at least, the first two premises of The Scientistic Argument cannot both be true by definition. For at least one of these premises, a difficult question has to be faced: How do we know this to be true? And science cannot answer that question. Above, I showed the difficulties that arise for defending premise 1.1 if we define “well-being” in a purely descriptive way. If we instead define “well-being” in a normative way – for example, as “the measure of that which makes a person’s life better” – then similar difficulties will arise for defending premise 1.2, as Kwame Anthony Appiah’s fine review of Harris’s book points out. The question then is: Which kinds of things increase or decrease a person’s well-being, and how much does each of them count in relation to the others? Conscious states? Knowledge? Activities? What about things that you don’t know about, or that even happen after your death, such as your book becoming famous, or your reputation being impugned? If well-being is understood as a normative property, then these questions are, precisely, questions about the nature of the supervenience relation between the moral and the natural; questions that cannot be answered by science. Yet the answers to these questions are anything but obvious: we must engage in moral reasoning, and moral philosophy, to find them.


29 Comments on this post

  1. But can they be answered even by moral philosophy? What if they are more akin to questions such as, “What shall I have for dinner today?” Then we cannot “find” the answers: we must rather choose them. What moral reasoning and moral philosophy can do, of course, is to help us check the consistency between our answers and suggest complex, consistent structures within which to make these choices, rather as nutritionists do in relation to food.

  2. Science can tell us what we ought to do *IF* we seek to maximize well-being. That is sufficient for me. You can have your “what is right”, whatever that means.

  3. Greg: How do you want to define “well-being”? If you define it as “what is good for an individual”, science can’t tell you how to maximize it. You’ll need moral reasoning to find out what things are good for an individual first. If, on the other hand, you define it as a value-free description, e.g. “the balance of pleasure over pain”, or [insert some other description of the things that you, Greg, think are good for an individual], then science may be able to tell you how to maximize it. But then you’ll have to answer the following question: why ought we to maximize [the balance of pleasure over pain / some other description of the things that Greg thinks are good for an individual]? Science can’t answer that question. And simply calling your preferred set of states “well-being” doesn’t answer it either.

  4. Simon: Thanks for replying to my comment. Yes, I define well-being as something like the balance of pleasure, joy and satisfaction over pain and suffering. The question “Why ought we maximize well-being?” is both irrelevant and incoherent. It is irrelevant, because even if no one has an interest in maximizing well-being, science can still tell us how to do it, so my claim stands. More importantly, your question is incoherent because it contains an unconditional ought and I have no idea what an unconditional ought could possibly mean. Could you explain to me what “X ought to A” means without an explicit or implicit goal? Cheers.

  5. Greg: First, if my question “Why ought we to maximize [Greg’s chosen set of conscious states]?” is irrelevant, then your first comment is all the more irrelevant to my post, since my objection was to Harris’s mistaken claim that science alone can determine what is morally right and wrong. I offered no objection to the claim that science can measure conscious states.
    Second, if it will help you understand it, then you may understand my question as having an implicit goal in it: that of being moral.

  6. Peter: Thanks for your question. I think it is worth asking, in part because sometimes people think the kind of subjectivist answer you suggest is the only alternative to either full-blooded moral realism or a religious foundation for moral truths. I think moral questions are not much like the question, “What shall I have for dinner today?” though they are similar in that they are, in some sense, questions about what to do. I think they may be much more like the question, “What is a good chess opening?” which has a lot less to do with individual preference, but still is a question about a human practice. My critique of Harris does not depend on this broader view though.

  7. I think you’re setting up a bit of a straw man by representing “well-being” as “[conscious states C]”.

    What about *physical* states? What about states that *can* be objectively measured?

    Starvation can be measured objectively.

    Mortality can be measured objectively.

    Number of birth defects, years of schooling, number of bias crimes…. all of these can be measured objectively.

    By focusing purely on internal gestalt, you may be betraying your own biases — as someone who suffers from few if any of these external symptoms of deficient well-being.

    I’m a Rortean pragmatist at heart, if that’s relevant, and I think that a great deal of the perceived “mushiness” of the term “well-being” can be remedied easily enough by filling in sets of data that can be accepted more or less universally.

    Infant mortality: anyone in favor? No? Didn’t think so.

  8. Palexanderbalogh: I’m not “setting up a … straw man” by characterizing well-being as “[conscious states C]”, I’m responding to what Sam Harris says himself in various places, e.g., “morality can be linked directly to facts about the happiness and suffering of conscious creatures” (p.64).

    But in any case, you miss the point: The Scientistic Argument is no less fallacious if you replace “[conscious states C]” with “[physical states P]”, or, if you like, “[conscious states C and physical states P]”. I did not object that science cannot measure [conscious states C] objectively. I objected that science cannot tell us that [conscious states C] (or your preferred alternative) are what *ought to be maximized*.
    Sure, we can all agree that starvation is bad, infant mortality is bad, birth defects are bad, and years of schooling are good. But *science* cannot tell us any of these things. And *even if* these claims are obvious, you haven’t yet done the hard work you need to do. To make decisions, we will need to determine the truth values of moral claims that are far from obvious: e.g. suppose we can spend our limited funds either to reduce the risk of catastrophic famine in our society from once every 500 years to once every 1,000 years; or to increase years of schooling from 5 to 6; or to reduce infant mortality from 7 per 1,000 to 6 per 1,000; or to reduce the incidence of blindness at birth from 12 per 1,000 to 8 per 1,000 – which should we choose? These are difficult *moral* questions; they concern how we ought to handle competing values and priorities. Science can – and should – inform the answers to them, and nobody sane denies this! But it cannot give us the answers on its own.

  9. Simon: see my latest reply to Roger Crisp on “science and morality”. I am still having difficulties being convinced that the kind of alternative (to subjectivism, realism or religion-based morals) that you suggest actually exists in a well-defined way.

    To respond to the chess example, this differs from the dinner question only to the extent that it is assumed that the objective is to win the game, in which case the criterion for “good” becomes an essentially scientific (empirical) one: which opening is most likely to win the game? A better example might be, “What constitutes a good painting?” But here, as with morality, I would tend to a subjectivist position.

  10. Simon, sane people often deny any link between well-being and morality. Sane people also deny that science has anything to tell us about value. It is overcoming these misconceptions that is at the heart of Harris’ book. That you consider these points trivial is perhaps why you have unrealistic expectations of Harris and fault him for failing to accomplish something he never set out to do. Can you cite the passages that gave you the impression that Harris intends to show that “science can give us the answers on its own”?

    In your response to me, you define the “right” action to be what you ought to do in order to “be moral”. From what I understand of your criticism, you see a gap between what it is to “be moral” and what it is to “maximize well-being”. If morality is simply a label, a free variable, then we can bridge that gap just by definition: an action is defined to be moral if it maximizes well-being. But it seems that you’re not going to let us get away with that. Instead, you seem to see the “moral” label as already being grounded and having some kind of content that is at least potentially incompatible with maximizing well-being. So what is that existing content attached to the “moral” label? It seems that you can only define it circularly: We ought to do A if A is moral; and A is moral if we ought to do A. Is there something with substance preventing us from seeing “acting morally” and “seeking to maximize well-being” as two different things?

    In your response to Palexanderbalogh, you break down your criticisms into two major points: 1) science cannot tell us that, for instance, years of schooling are good; and 2) science cannot tell us how to choose between alternatives such as increasing school funding or decreasing infant mortality.

    To (1) I think Harris would disagree and say that there is nothing in principle preventing science from predicting the effect of years of schooling in terms of brain states and placing those brain states on the spectrum from The Good Life to The Bad Life, which we agree upon by definition.

    To (2) I think Harris would agree that there may not be a single solution to that problem, so science may not be able to solve the dilemma even in principle. However, that is not to say that science can answer no moral dilemmas. If every aspect of the well-being of all sentient beings in the present and future are reduced by action A more than action B, there is just no way to defend the claim that A is the morally right action.

  11. Greg:
    1) If you’re right, Harris is clearly attacking a straw man. Citations please, of sane persons who are not moral sceptics and who claim that science is of *no* relevance to informing our answers to any questions of value. It hardly takes a book to refute this.
    2) The subtitle of Harris’s book is “How science can determine human values”
    3) I take it you meant to ask the question: ‘Is there something with substance preventing us from seeing “acting morally” and “seeking to maximize well-being” as the same thing?’ Yes, if you also want to define well-being in descriptive terms there is: “acting morally” is a normative concept. Please refer to my post.
    4) What is this “brain states … spectrum from the Good Life to the Bad Life, which we agree on by definition”? I haven’t come across it!
    5) If there is no way to defend a moral claim in the light of some scientific evidence, that does not mean that *science* has, on its own, proven it false. (Once again, I have no beef with the claim that science may have *helped* us prove it false.)

  12. Thanks for your reply, Simon. Let me focus on (3), since I think that it is my key point, and the main point of your article. My position is that without a specified goal, normative statements are incoherent. The only way to make sense of a statement like “X ought to A” is to condition it upon X attaining some goal. In a previous reply, you said that the implicit goal of moral statements is “that of being moral”. Thus, moral statements are of the form: “X ought to A if X is to be moral”. Once the implicit goal is made explicit, we bridge the is-ought gap, and we are no longer comparing apples and oranges. “X is a morally good guy” and “X maximizes well-being” are both positive descriptive statements. Now, what’s wrong with defining “a morally good guy” as “a person who maximizes well-being”?

    You seem to have the following options: a) provide a coherent explanation of what an unconditional normative statement means; b) choose a different implicit goal in moral normative statements other than “being moral”; or c) find some other substantive reason why “moral” is not a free variable label to which we can assign the meaning “maximizes well-being”. Cheers.

  13. Greg: “X is a morally good person” is not a descriptive statement (this point can be obscured by the fact that the statement contains the word “is” and the descriptive-normative gap is often referred to as the “is-ought gap”. It is on the “ought” side of that gap). As I wrote in the post, you can define “well-being” in either a purely descriptive or a normative way (but not both; see the post). You’ve indicated a number of times that you want to define it in a descriptive way: “X maximizes well-being” is then to be defined as, e.g. “X maximizes [conscious states C & physical states P]”.

    Now if you want to define the words “is morally good” to mean “maximizes well-being”, by which we can infer (from transitivity of definition) that you want to define it as “maximizes [conscious states C & physical states P]”, you are free to do so. Similarly, if you want to define the words “is morally good” to mean “is a pair of big, red clown shoes”, you are free to do so. You will then be able to use science to directly reach various truths about what “is morally good” in your sense.

    Unfortunately, since in either case you will have defined “is morally good” in a non-normative, purely descriptive way, you clearly won’t be speaking English any more, and you won’t mean the same as the rest of us mean when we say things like, “Doing A is morally good”, or, “She is morally good”.

  14. Simon, thank you for your close attention to my comments and thoughtful responses. You have sufficiently clarified the gap you speak of for me to recognize it. Your article presents this gap as if it is a problem that Harris had not even considered. However, he addresses it explicitly in his book (pg. 39):

    “We simply must stand somewhere. I am arguing that, in the moral sphere, it is safe to begin with the premise that it is good to avoid behaving in such a way as to produce the worst possible misery for everyone. I am not claiming that most of us personally care about the experience of all conscious beings; I am saying that a universe in which all conscious beings suffer the worst possible misery is worse than a universe in which they experience well-being. This is all we need to speak about ‘moral truth’ in the context of science.”

    So it seems unfair that you characterize Harris’ position as stating that science can answer moral questions “on its own”. Harris acknowledges the need for a premise that cannot be defended by science. He defends this premise by claiming that without it or something like it, “good” and “bad” are meaningless.

    Now, you may not like the way he addresses the issue you raise, but that’s different than him not addressing the issue at all. I’d be interested to see you follow up with an article that attacks his approach head-on. This will be my last comment on this article. All the best.

  15. Simon: you didn’t respond to my last comment, but I have been following with interest your exchange with Greg.

    I take your point that “X is a morally good person” is normative, while “X maximises well-being” is descriptive, so they can’t be the same thing even by definition. On the other hand, it seems to me that you can, as a normative statement, say “X is a morally good person exactly to the extent that X maximises well-being”.

    I don’t quite agree with Greg’s statement that normative statements are incoherent without specified goals. I’m not sure how he’s using the word “incoherent” here, but in any case this position seems to rule out of court any statement that is not either a tautology or of a scientific nature, including the one I propose above. However, just because this or that normative statement is “coherent” doesn’t make it true or correct, and in this context I hold to my subjectivist position. If I say that X is a morally good person exactly to the extent that X maximises well-being, this is a statement about my own ideas about what “ought” to be. It is not a statement about the world.

  16. Hi, I’m new to this blog, and I’d like to join in this interesting discussion.

    I haven’t read the book, but I read Harris’s online articles a few months ago, following his TED talk, and I thought they were misguided. He seemed to pay insufficient attention to the meaning of his language, resulting in ambiguity and fallacies of equivocation.

    The statement which is labelled “Premise 1.1” above seems ambiguous to me. It can be taken as a substantive claim about which kind of actions are morally right. Or it can be taken as a definition of the meaning of “morally right”. In his online articles, Harris mostly seemed to treat this as a substantive claim. But on one occasion he referred to it as a definition. Can anyone tell me if he is any clearer on this point in his book?

    A lot of “moral naturalists” mistakenly take utilitarian statements of this sort to be definitions. In treating these as definitions they are putting themselves in the position that Simon has described: they’re no longer speaking English as we know it.

    Greg wrote: “My position is that without a specified goal, normative statements are incoherent.”

    I would put it differently, because I don’t consider purely goal-based statements to be normative. Consider the statement, “if you want to catch the 5 o’clock train, you ought to leave now”. I consider this equivalent to the factual descriptive statement: “leaving now is the course of action most likely to result in you catching the 5 o’clock train”. What I prefer to say is that there are no normative facts. I’m a normative (as well as moral) anti-realist.

    The error Harris makes in his is-ought argument is that he fails to distinguish between normative (or moral) oughts and descriptive (or non-moral) oughts. I ought (non-morally) to avoid the condition of the worst possible misery for all, because that would not be conducive to my goals. And presumably the same is true for everyone else. But it doesn’t follow that I have a moral obligation (morally ought) to avoid such a condition.

    I would add, however, that specific statements can have both descriptive and normative meaning. “You ought to give money to Oxfam” could be partly descriptive (that’s the way to achieve your goals) and partly normative (prescribing the action regardless of whether it achieves the agent’s goals).

    The same applies to “well-being”. It has both descriptive and normative meaning at the same time. In other words, it’s a value-laden term, but is not purely a matter of value. It has sufficient descriptive meaning that we can say some outcomes involve more well-being than others. The greatest possible misery is clearly not a state of well-being. But there are many cases where there is no fact of the matter. It might be fair to say that longer life and good health are both conducive to well-being, but there is no fact of the matter as to their relative contribution. Which involves more well-being, a shorter healthier life or a longer less healthy life? That’s a matter of personal preference, i.e. what you value.

    From my point of view, the issue of “scientism”, or whether science (specifically) can answer moral questions, is a red herring. As far as I’m concerned, rational empirical reasoning is the only way we can learn anything about the world. Science is part of the broader continuum of empirical reasoning, which also includes history and philosophy. Some questions–like those usually addressed by historians–don’t lend themselves to the specific rigorous methods that we associate with science, like controlled experiments. But the boundary between science and other empirical reasoning is not a fundamental one, and is poorly defined. The real issue here is whether there can be any moral facts at all, not which specific empirical methods can be used to discover them.

  17. Greg: Let me first thank you for your careful attention and gracious replies; these are rare qualities indeed on the internet, but they are certainly (and objectively, I would say) virtuous!
    You think I’ve represented Harris unfairly by suggesting that his view is that science can answer moral questions on its own, and you express your interest in my following up with “an article that attacks his approach head-on.” I would be happy to oblige, but quite honestly I don’t see how to take a more “head-on” approach to Harris than I have already. This is because he is a moving target: he says different things in different places, and even when he does not contradict himself outright, crucial aspects of his positions are left hopelessly unclear. One demonstration of this is his slippery usage of the term “well-being”; another is your conviction (which your citation seems to support) that Harris means to argue that science can answer moral questions only once one or more moral premises are granted, whereas I have the contrary impression from other places.
    My impression that Harris attempts an immodest and fallacious argument, by the way, is confirmed not only by the book’s subtitle, but also by Harris’s claim to have bridged the is-ought gap and avoided the “naturalistic fallacy” in the section on Facts and Values in ch. 1, where he says: “If we define ‘good’ as that which supports well-being … the regress initiated by Moore’s ‘open question argument’ really does stop.” (12) (I used a short version of Moore’s open question argument in my post to show that normative and descriptive properties may be ‘too different’ to be the same thing.) My impression is further confirmed by quotes like these: “morality should be considered an undeveloped branch of science” (4), and, “My claim is that there are right and wrong answers to moral questions, just as there are right and wrong answers to questions of physics, and such answers may one day fall within the reach of the maturing sciences of mind.” (28)
    Let me briefly address your suggestion that Harris in fact accepts that we need one or more moral premises in place before science can help us to answer moral questions. If this is really his view, then his argument that science can help us answer moral questions is a very modest one. Here is an example of a moral question: “Ought we to cut down on our use of fossil fuels?” Who would not already accept that science is of relevance to how we should answer it? This is why I say that Harris is just making a straw man argument if your interpretation is correct. Perhaps the book would still have been interesting if he had provided a significant and novel argument for his (basically utilitarian) moral premise, or some novel replies to the objections to it (the objections are very standard and well-known, and some of them are very serious). Disappointingly, though, Harris does none of these things. (I note that there’s a whole section in the book entitled “moral paradoxes” which is really a list of some standard objections to his preferred kind of view, and they go largely unanswered!)
    By the way, if you have further suggestions for a future post at Practical Ethics but don’t want to comment here in the thread, you can email me at: myfirstname.mylastname@philosophy.ox.ac.uk

  18. Peter: Sorry for not responding to all your comments; I just want to keep my contributions to the narrower topic of the original post. On that topic, certainly nothing I’ve said rules out the view that, “you can, as a normative statement, say ‘X is a morally good person exactly to the extent that X maximises well-being’ ” (even if “well-being” is defined in descriptive terms). But, as I discussed in the post, that sort of claim demands an argument drawn from moral reasoning or moral philosophy, not from science.

  19. Hi Richard, I can confirm that Harris is no clearer in the book about the status of premise 1.1; sometimes he talks about defining “good” in terms of well-being, at other times he says that what is good is “determined” by well-being. Well spotted.
    Unlike you, I believe that there are normative truths and that they are not a matter of subjective preferences, but I want to avoid that topic here. Let me just remark on something you said about “well-being”. You wrote, “[“well-being”] is a value-laden term, but is not purely a matter of value. It has sufficient descriptive meaning that we can say some outcomes involve more well-being than others.” Suppose that Harris follows you in that claim. Then it is worth noting that, on either your view or his, we could not properly say that “some outcomes involve more well-being than others” purely on the basis of scientific investigation. On your view, because the meaning of “well-being” refers in part to your subjective preferences, you can’t say an outcome would involve “well-being” unless you subjectively prefer it. On Harris’s view, because the meaning of “well-being” refers in part to normative truths, you can’t say an outcome would involve “well-being” unless the normative truths support pursuing it.

  20. Hi Simon. Thanks for your helpful reply. Sorry I’ve taken so long to get back to you. I’ve been busy.

    Regarding well-being, first let me correct an error I made. I wrote, “That’s a matter of personal preference, i.e. what you value.” In writing that, I conflated two possibilities as to who “you” could refer to. I failed to distinguish between the values of the speaker (of a statement about well-being) and the values of the subject (whose well-being is under consideration). Insofar as the well-being of a subject is taken to be a function of his preferences (or values), there is a fact (at least an approximate one) as to the correct value of that function. There is a fact about what his preferences are and a fact about how well his condition fits his preferences. So the values relevant to the question of value-ladenness are the values of the speaker, not of the subject.

    Now, to respond to your argument. I would reject your conclusion on the grounds that there’s a limit to how far the descriptive meaning of the word can be stretched. If someone sincerely claims that the right criterion for measuring well-being is the level of misery (so the worst possible misery is the highest state of well-being), I wouldn’t just think he had peculiar values. I would think he’d misunderstood the meaning of “well-being”. For someone to be simultaneously in a state of maximum misery and maximum well-being seems like a contradiction in terms.

    This is different from moral right. Suppose someone says, “action X is morally right.” No matter how outrageous his X was (even inflicting the worst possible misery on someone), I wouldn’t consider that a contradiction in terms.

  21. Hi Richard, I’m afraid I find the first paragraph of your reply rather confusing. Did you accidentally transpose the words “speaker” and “subject” in the last sentence? If so, then I understand what you mean when you talk about the descriptive component of the meaning of “well-being”. Good, now we have a (descriptive) necessary condition that some state must meet if it counts as “well-being”. It has to fulfill the subject’s own preferences to some degree. This condition can be measured scientifically. Let’s suppose we can all agree on this.

    However, going back to your first comment, I’m not sure what you mean by the “normative meaning” of “well-being”. You say that there are “no normative facts”, and that the statement “You ought to give to Oxfam” prescribes giving to Oxfam. This is what is known as a non-cognitivist, prescriptivist view about “ought”. Now, do you think that when you say “X would promote well-being”, you are similarly prescribing X (is that what you mean by saying that the meaning of “well-being” is partly normative)? If so, then when you say “X would promote well-being”, you are not simply stating a scientific fact. You are, rather, prescribing an action that meets the descriptive condition set out above. Since science can’t tell you what to prescribe or not, it would seem that science can never on its own justify the statement “X would promote well-being”.

  22. Thanks Dennis, that’s a good review. I hadn’t realised that Harris doesn’t mention Nozick’s Experience Machine at all, not even burying it in a footnote somewhere. When the theory you defend is that the only things of value are conscious states, failing to mention that objection can only be either negligent or deceptive. It’s not as though it’s too technical and difficult for a general audience. (Interested readers will find a brief account of The Experience Machine on Wikipedia.)

  23. Hi Simon. First let me say that I’m not a philosopher, so may not be using philosophical terms correctly, in which case I apologise.

    No, I didn’t transpose the words “speaker” and “subject”. But perhaps it would have been clearer if I’d said the following in place of that sentence. A statement (about well-being or anything else) is value-laden to the extent that it’s expressing the _speaker’s_ values.

    >Good, now we have a (descriptive) necessary condition that some state must meet if it counts as "well-being". It has to fulfill the subject's own preferences to some degree. This condition can be measured scientifically. Let's suppose we can all agree on this.<

    I think it's better to think in terms of a criterion (or mathematical function) for evaluating well-being, rather than just a necessary condition, because well-being is not just a binary value (present or not). And the criterion you've mentioned is not the only possible one. Even if we accept that fulfilling the subject's preferences must play some part, it needn't be the whole criterion. Another criterion might also take into account the subject's health, regardless of whether the subject prefers to be healthy. People often have quite self-destructive preferences, and we might not want to take the fulfilling of those as contributing as much to well-being. So my evaluation of well-being will still depend on my choice of criterion. And I'm saying that this choice depends to some degree on my (the speaker's) values.

    When we make a typical judgement of someone's well-being we probably don't have any clear criterion in mind. We are probably influenced by such obvious factors as health and happiness, but without any particular formula for combining them. We probably aren't influenced much by reasoning (even subconscious reasoning) about preference fulfillment. Well-being is a very vague term, and we are trying to impose a precise criterion on it. We shouldn't expect there to be one correct criterion.

    I'm not saying that my values are the only reason why I might prefer one criterion over another. I said there's a limit to how far we can stretch the meaning of "well-being". But long before we get to the breaking point, we may feel that some criteria are truer to the meaning of the word than others. Also, there may be criteria between which I have no preference, or where my preference is only a matter of the relative ease of evaluation. It could be that the choice of criterion is not very significant when it comes to judgements of individual well-being, and that we'd get much the same result regardless of that choice. From the point of view of Sam Harris's arguments, I think this is the least serious objection.

    Proceeding to your second paragraph... I consider myself a moral error theorist, not a non-cognitivist, because I think moral statements don't only express non-cognitive attitudes. I think speakers of moral statements are also normally attempting to describe a moral reality, but they fail because there is no moral reality to describe. So moral statements can be said to have descriptive meaning, but their descriptions cannot be true.

    I think the statement "X would promote well-being" has truth-apt descriptive meaning. In a given case, X might promote well-being regardless of which criterion you use for evaluating well-being (as long as that criterion doesn't stretch the meaning too far). But in other cases, there is no fact of the matter as to whether it's true, because its truth depends on the criterion used. In practice I would be inclined to call such a statement true if it was true by every criterion that any of my listeners was likely to choose. The statement also has a value element insofar as it depends on the speaker's values with regard to well-being (as I've discussed). It may also have a prescriptive element, depending on the speaker's motivation for saying it, but I would consider this a kind of optional extra, not a standard meaning of those words. I've tended to use the word "normative" to describe both those elements.

  24. Hi Richard, I’ll just clarify one thing: My objection to Harris in the post is not that he can’t provide an objective, precise, scientific definition of “well-being”. It is that *if* he provides one, then science can measure it but can’t possibly tell us that it is right to maximize it. The same goes for a vague scientific definition of well-being, although a vague definition would make measurement of well-being rather more difficult. We need to do a good deal of moral reasoning to discover the link between well-being, defined in such a way, and what it is right to do/maximize (that’s where Nozick’s Experience Machine, among other things, comes in). So it’s not true that “science can determine human values”, as Harris claims.

    This problem doesn’t go away if you say Harris is not providing a “definition” of well-being in scientific terms, but a “criterion” of it (i.e. a substantive claim about what it is). This is because it could not have been science that told us what the criterion of well-being is (defined as something it is right to maximize). Moreover, there is little agreement in this field, and there is a vast range of plausible but distinct views about what human and animal well-being consists in (e.g. hedonistic, preference satisfaction, and a range of “objective list” theories). Harris writes as if there is no significant disagreement about such matters, and as if there are no serious and well-known objections to the vague but still questionable ideas he presents himself.

  25. Simon,

    I’ve continued to follow this discussion with a lot of interest. As far as I can see we are all basically in agreement that Harris is wrong to assert, assuming this is what he does, that science alone can tell us what is morally right and wrong. By contrast there seems to be considerable *disagreement* with your assertion that moral reasoning and/or moral philosophy, unlike science, can.
    You have pointed out more than once that you prefer to avoid the latter topic here, and that’s fair enough. Nevertheless I would be interested in reading a summary of your position on this issue, either on this blog or elsewhere. As you said in your initial reply to my first comment, the question is worth asking…and if it is worth asking, it is presumably also worth answering.

    As far as I can work out my own position on this issue seems to be closely, perhaps even exactly, aligned with that of Richard. In particular I agree with him that normative statements don’t have to be explicitly goal-oriented to be coherent (as Greg asserts). I just don’t see how moral reasoning or “philosophy” alone can provide an irrefutable justification for them either. Richard put it very well when he wrote: “speakers of moral statements are also normally attempting to describe a moral reality, but they fail because there is no moral reality to describe”. I will just add one caveat: they do describe a reality, but the reality they describe (sometimes imperfectly of course) is nothing more or less than the speaker’s own values.

  26. Hi Simon. My point about “least serious objection” wasn’t in any way aimed at you. Sorry that was unclear. Perhaps I should also clarify that none of my posts here have been aimed at your critique of Harris, except insofar as I didn’t like your label “scientistic argument”, as I don’t think scientism is relevant here. I’ve just been trying to give my own analysis of the term “well-being”, partly in response to your objections and questions.

    To clarify another thing, I haven’t been using “criterion for well-being” to refer necessarily to either a definition of well-being or to a substantive claim about what constitutes well-being. I’ve just been using it to refer to any formula someone might use to evaluate something that they would call “well-being”.

    I’d also add that using a vague term like “well-being” helps Harris to avoid seeing his fallacies of equivocation. He can start by claiming that morality must have something to do with well-being, because everyone cares about well-being. And he then gradually fudges this into the claim that facts about what maximises the sum of human well-being are objective moral facts.

  27. Hi Peter. Thanks for the compliment. I differ with you on a couple of points, or at least on the way you express them.

    –In particular I agree with him that normative statements don’t have to be explicitly goal-oriented to be coherent (as Greg asserts).–

    That’s not how I would put it. I would say that _purely_ normative statements cannot be truth-apt (or possibly that they’re incoherent). But many statements have a mixture of descriptive and normative meaning. Such statements can only be truth-apt with regard to their descriptive meaning, not their normative meaning, and I’m reluctant to refer to those mixed statements simply as “normative statements” in case of confusion.

    –I will just add one caveat: they do describe a reality, but the reality they describe (sometimes imperfectly of course) is nothing more or less than the speaker’s own values.–

    I would say a moral statement doesn’t _describe_ the speaker’s values; it _expresses_ those values. If it described the speaker’s values, then there could be a fact of the matter as to whether that description was correct, and therefore a fact of the matter as to whether the statement was true.

  28. Thanks for these clarifications, Richard. On the first I think I misinterpreted your statement: “I don’t consider purely goal-based statements to be normative.” So the issue you were taking with Greg was whether goal-based statements are normative at all (as he implied), not whether “other” normative statements are coherent or not. My own position would be that they *are* coherent, since I understand this term differently from the term “truth-apt”: I don’t think a statement has to be truth-apt to be coherent.

    On your second point I agree, that’s a better way of putting it.
