
What Fuels the Fighting: Disagreement over Facts or Values?

In a particularly eye-catching pull quote in the November issue of The Atlantic, journalist and scholar Robert Wright claims, “The world’s gravest conflicts are not over ethical principles or disputed values but over disputed facts.”[1]

The essay, called “Why We Fight – And Can We Stop?” in the print version and “Why Can’t We All Just Get Along? The Uncertain Biological Basis of Morality” in the online version, reviews new research by psychologists Joshua Greene and Paul Bloom on the biological foundations of our moral impulses. Focusing mainly on Greene’s newest book, Moral Tribes: Emotion, Reason, and the Gap Between Us and Them, Wright details Greene’s proposed solution to the rampant group conflict we see both domestically and internationally. Suggesting that we are evolutionarily wired to cooperate or ‘get along’ with members of groups to which we belong, Greene identifies the key cause of fighting as different groups’ “incompatible visions of what a moral society should be.”[2] And his answer is to strive for a ‘metamorality’ – a universally shared moral perspective (he suggests utilitarianism) that would create a global in-group, thus facilitating cooperation.

Wright, however, thinks Greene’s only got it half right. While in-group bias – the tendency to overestimate the virtues, underestimate the vices, and inadequately scrutinize the assumptions of one’s own group – certainly leads to protracted group conflict, this bias doesn’t have much to do, Wright says, with fundamental moral disagreement between groups. Rather, he claims that the persistence of fighting is rooted in the way these cognitive biases affect our judgment of the facts of situations in which one’s in-group is pitted against an out-group. As an example, Wright cites the case of American-Muslim tensions since 9/11:

There’s no big difference over ethical principle here. Americans and ‘jihadists’ agree that if you’re attacked, retaliation is justified (an extension of the sense of justice, and a belief for which you could mount a plausible utilitarian rationale, if forced). The disagreement is over the facts of the case – whether America has launched a war on Islam. And so it is with most of the world’s gravest conflicts. The problem isn’t the lack of, as Greene puts it, a ‘moral language that members of all tribes can speak.’

On this account, it is due to the difficulty of viewing ‘the facts of the case’ impartially – in turn caused by biologically-rooted cognitive biases – that we continue to clash.

Yet Wright concedes that Greene’s emphasis on the causal role of fundamental moral disagreement “may hold more water” in explicitly value-based domestic debates about abortion or gay rights.[3] It is not clear to me, however, that his jihadist case is all that dissimilar from, for example, his abortion case. One could argue that the abortion debate in the United States is similarly fueled by a disputed ‘fact’: whether or not a fetus is a person. We can all agree that it is wrong to kill a person unjustifiably, just as Americans and jihadists can both agree that retaliation is justified if you are attacked. We just disagree about a ‘fact of the case’.

But the key point is that this disputed ‘fact’ hinges on what it means to be a ‘person,’ an inherently normative concept: its definition cannot but be value-based. Similarly, the disputed fact of the jihadist case – whether “America has launched a war on Islam” – depends on the inherently value-based dispute over what it means to ‘launch a war on Islam’. And these normative disputes will persist even if, as Wright recommends in the conclusion of his essay, we (somehow) achieve cognitive “bias neutralization.”[4]

Wright, then, is a bit too dismissive of the role of incompatible values in fueling fighting. His concluding suggestions for overcoming cognitive biases through self-awareness and mindfulness will certainly help in reducing ugly and violent conflict. But I can’t agree with the implication of his argument that, if only we could be completely impartial, if only we could evaluate the facts of the case without bias, there would cease to be serious moral disagreement. And masking fundamental moral disagreement as factual dispute will probably not be useful in reducing conflict.

This is not to say that Greene’s recommendation for a global metamorality of utilitarianism is the right way to go: that sounds about as unappealing as it does implausible. But he may be on the right track. We seem to make progress in the face of seemingly intractable conflict when we can agree on at least some values, when we can find at least a bit of Rawls’ ‘overlapping consensus.’ I’m reminded in this instance of an analysis by Kennedy Institute of Ethics researcher Cynthia B. Cohen on the strategy of President Bill Clinton’s National Bioethics Advisory Commission.[5] Tasked with providing a recommendation on the permissibility of embryonic stem cell research, the committee, led by Harold Shapiro, focused on the fact that nearly everyone agrees that abortion is permissible when the mother’s life is in danger. Based on this agreement, the committee argued that embryonic stem cell research ought to be funded for research on life-threatening or seriously debilitating diseases; if you think that aborting a fetus is permissible to save a life, the argument goes, you must think that the destruction of embryos that occurs in embryonic stem cell research is permissible if that research will likely save lives. Although Clinton, under political pressure, did not act on the advice, it still seems like a good example of a way forward in a country rife with incompatible moral perspectives (understatement of the year?). It also, unlike Greene’s metamorality, does not posit the need to erase fundamental moral disagreement.


[1] Robert Wright, “Why We Fight – And Can We Stop?,” The Atlantic, November 2013, 113.

[2] Joshua Greene, Moral Tribes: Emotion, Reason, and the Gap Between Us and Them, Penguin, 2013, cited in Robert Wright, “Why We Fight – And Can We Stop?,” The Atlantic, November 2013, 106.

[3] Wright, 114.

[4] Ibid., 118.

[5] Cynthia B. Cohen, “Promises and Perils of Public Deliberation: Contrasting Two National Bioethics Commissions on Embryonic Stem Cell Research,” Kennedy Institute of Ethics Journal 15.3 (2005): 269-288.


1 Comment on this post

  1. Very interesting post. I think you’re right that a central problem for Wright is that many alleged ‘factual’ disputes involve normatively-laden concepts. Moreover, even the notion of bias itself is generally understood normatively. If bias involves something like the influence of factors irrelevant to the case at hand, then one must have a standard about what factors are relevant. In moral cases, that standard will be a moral one. For instance, in claiming that racial bias is a moral failing, one presupposes a normative framework of moral equality among races. That’s an uncontroversial assumption, but other standards won’t be so easy – including the moral relevance of religion. So pushing for bias reduction really involves pushing one’s own moral views about what’s morally relevant, and those standards will be subject to intense disagreement.

    One solution is to rely on a more neutral, subjective conception of bias. Instead of taking bias as involving what is objectively irrelevant, take it as involving what is irrelevant according to the (potentially biased) agent’s own lights. So, combating bias essentially involves encouraging internal consistency. This can’t overcome all forms of disagreement, as between the thoroughgoing racist and egalitarian. But I suspect a lot of disagreements do come down to such potential internal conflicts (most instances of racism these days come from those who, on consideration, would accept racial moral egalitarianism). This focus won’t get you to that utopian ideal of complete conflict resolution, but it may be able to help clarify and reduce many serious conflicts.

    Something like this is going on with the overlapping consensus approach you advocate – we take cases where our internal standards are the same and make inferences/deductions from there. But that’s not going to be enough – as you point out, Shapiro was not successful in getting a national consensus (or even intra-party consensus) on stem cell research. Many years later, public opinion and political reality have swung back in favor of stem cell research, but it’s not at all clear that’s primarily because of proponents’ overlapping consensus strategies.

    So what went wrong? Maybe part of the story is that, despite starting from similar premises, people made poor inferences and deductions. So one strategy in line with Wright’s proposal is to improve people’s inferential and deductive abilities. Most obviously, this points to a need for better public education – including what might be called philosophical education. And more speculatively, we might look for certain forms of biomedical interventions to improve such cognitive performance. Even if this approach is no panacea, it has the advantage of not presupposing particular (and controversial) normative positions, perhaps dodging some of the problems of disagreement that have been raised.
