
The Rationalist Prejudice

Professional ethicists seem to love controversy. I myself have been too boring in this regard, but many of my colleagues have provoked heated debate. This often spills out of the safety of academia into society at large, as many of the past entries in the Practical Ethics blog testify. And professional ethicists rarely regret sparking off controversy, for in the view of many of them this amounts to inviting more people to think, and that cannot be a bad thing. Behind this is an implicit, rationalist assumption that subjecting generally accepted – and thus hitherto uncontroversial – norms and practices to critical scrutiny is always a good thing to do. They believe that public debate over an ethical problem is likely to generate a wide range of ideas which may eventually lead to a solution, and that making people think harder and talk openly about ethical issues has intrinsic value. It is part of the ethicist’s job, then, to be controversial. Indeed, in some people’s opinion it is what practical ethics is really about.

Is the rationalist assumption sound, though?

Certainly, there is something to be said for it. After all, many of the things that may reasonably be described as achievements of human moral progress could not have happened unless somebody took on the task, and often the burden, of challenging and critically scrutinising traditionally held beliefs. Slavery and gender inequality used to be taken for granted; interracial marriage used to be considered morally repulsive and was illegal in some parts of the world. Those and other past prejudices are fortunately gone. Of course, rational scrutiny by itself has never been and will never be enough to bring about significant moral and political change. It must be complemented by campaigning, pamphleteering, bargaining, compromise, agitation, mobilisation and sometimes even violence. But rational scrutiny is vitally important because good reasons must be given to promote a cause. Otherwise, indoctrination will replace persuasion, and might will make right.

However, pace my over-rationalist colleagues, this does not mean that rational scrutiny is always a good thing to do. For one thing, there are many questions that do not deserve serious consideration. For example, ethicists do not need to ponder – at least for now – whether literally going back in time in a time machine is a solution to historical injustice. In addition, there are some ethical issues that have been settled, and settled for good. We do not need to seriously consider whether slavery should be restored, or whether a certain category of people may be massacred because they are of a ‘wrong’ kind. A society where public debate occurs over those issues is worse than a society where it does not. If so, provoking controversy on settled issues can amount to doing damage to the society we live in. Of course, which issues have been settled and which have not is highly contestable; one should indeed raise a voice of dissent if one has good reason to do so, even if the voice is likely to upset the fabric of society. Yet one should not forget that trying to put what seems like a long-settled ethical issue back on the agenda often comes with a significant price to pay as well as potential benefits to gain.

That said, the most important objection to the rationalist assumption seems to me to lie elsewhere; it is about opportunity cost. Neither professional ethicists nor the public can afford to discuss everything with equal seriousness. Rational scrutiny costs. While we consider X, we cannot consider Y. This week’s op-ed has to focus on this issue, not others. If so, we must judge which issues matter more and which matter less. In a world where resources are limited, we cannot afford to critically examine everything. This is especially true in academia, where zero-sum competition for resources inevitably occurs between different branches of an institution. If a grant is given to ethics, it is not given to other potentially useful subjects such as pharmacology and social policy. Utility is not everything, but it requires due consideration.

If what I’ve said is right, then the rationalist assumption turns out to be a prejudice – and a potentially harmful one. By endorsing the rationalist prejudice, one may be taking our attention away from what really matters and doing damage to our society.

Unfortunately, professional ethicists in this age of growing academic competition and ‘impact factor’ measurement are in a way structurally incentivised to badly judge what matters. We are pressured to show that we are doing something – that our papers are cited, our ideas discussed, our output ‘making a difference’. This is a legitimate and even admirable goal to pursue, but it can work perversely today, because one lazy way of numerically increasing the ‘impact’ of research is to scandalise. Defend a ridiculous ethical position you do not even believe in, and you may be a ‘high impact’ ethicist! In the long run, then, we need a better way of assessing the significance of research in ethics, to reduce the incentives to scandalise and to vulgarise the discipline. A word of caution is in order in the meantime: we should resist the rationalist prejudice, or we will do a disservice to what we care about.


16 Comments on this post

  1. I would not say that we are freed from slavery.

    I attended a conference at Stanford last week.
    Stopped in at a McDonald’s in Menlo Park and all the workers were Hispanic and all the customers, save me, were Hispanic.
    It was 6:30 in the morning – where were all the rich folk?
    The only people who were up were the workers, who had the sole duty of making the lives of the rich – easier.
    Slavery is alive and well – it is just economic slavery – far more efficient than physical slavery –
    you pay your workers the minimum and fire them if they don’t show up or get sick or complain.

    But I agree, many articles in Ethics, in Philosophy and in much of the Humanities need not have been written,
    and do take away from more important questions.

    1. Thanks, George (if I may). You’re right in observing that severe inequality has not ceased to exist and that it often results in what may be called ‘economic slavery’. What we should do about such inequality is indeed an extremely important question. This, however, is not the same as the question I mentioned in the blog post, i.e. whether we should restore what you called ‘physical’ slavery. This issue, I believe, has been settled.

  2. Hello Kei,

    I’m in agreement with almost everything you say here. But I think it might be worth disentangling three different reasons to argue against commonly accepted views, all of which are discussed in your piece:

    (1) The general principle that it is always a good idea to challenge commonly accepted views.
    (2) Specific, reasoned objections to some particular commonly accepted view.
    (3) A desire for controversy (and so ‘impact’) achieved by insincerely arguing against commonly accepted views.

    It seems to me that only (1) directly fits what you’ve called “the rationalist prejudice”. That is, only (1) involves an assumption that it is always and everywhere desirable to confront moral common sense with sharp rational critique. It isn’t a new phenomenon; Socrates seems to have thought he was doing something of this sort.

    Probably (1) is related to the others – if one accepts (1), then it provides cover for engaging in (2) or (3). That is, if criticizing commonly accepted views is always and everywhere desirable, then clearly it is desirable to criticize some particular view when one sincerely objects to it — and it might even be desirable to insincerely criticize views, if this brings about the aim of getting those views criticized.

    On the other hand, we could reject (1), the “rationalist prejudice”, and nevertheless affirm (2) or (3) on independent grounds. You yourself appear to endorse (2) in at least some circumstances: “one should indeed raise a voice of dissent if one has good reason to do so, even if the voice is likely to upset the fabric of society”. And we can imagine instrumental justifications for (3): if the only way to get attention and funding for genuinely worthwhile projects is to regularly also produce flashily controversial arguments, then perhaps we ought to do so. (This argument, of course, assumes empirical claims contrary to yours. It assumes that controversy increases the total audience and funding available to ethicists, rather than merely drawing a fixed pool of resources toward unworthy, insincere controversy. I don’t know who is correct about the empirical claims.)

    None of what I’m saying is in disagreement with you. But it does seem like there are some meaningful logical differences in these three motives for controversy. Most importantly, I’m not sure that your division of resources argument tells against (1) or (2), at least not in the way it does against (3). If one sincerely believes that all commonly held views, or at least certain ones, ought to be put to rational challenge, then it will not be a bad thing if funding and attention are drawn to those sites of controversy.

    1. Thanks, Regina. You’re right in pointing out that (1), (2) and (3) are logically distinct issues, that the ‘rationalist prejudice’ as I defined it refers to (1), and that I have no general objection to (2).

      My opportunity cost (‘division of resources’) argument is meant to speak directly against (1). I can’t see why it does not. If resources are limited, and if rational scrutiny costs, and if the kind of resources (e.g. money and time) that we use for rational scrutiny can be used for other useful purposes also, then we should not allocate all the resources we have to rational scrutiny. To say that rational scrutiny is *always* a good thing to do is to deny that.

      As for (3), the problem I’m concerned with is not exactly insincerity as such. It’s rather about the institutionalised academic culture that gives professional ethicists at least some incentives to be insincere, to be overly provocative, etc.

      I don’t believe I can do justice here to all the points you kindly raised. There’s much for me to think about!

  3. It could also be the case that on average commonly accepted views are not challenged at the optimal level. At least in some societies unquestioning belief has been the norm, and the result was that injustices and inefficiencies persisted far longer. Most people do not have the high need for cognition evident in philosophers, and seem to be quite content to assume “things are as they are”. The spread of inquiry and of systems for managing criticism constructively has been an important part of the rise of the West since the Enlightenment. So challenging the status quo, even when badly focused, might be beneficial.

    This does not contradict Kei’s argument: we should still aim our rationalism at the right targets, and we might be overdoing it compared to the optimum level of challenging. But I guess that is partially an empirical issue: are critical minds *actually* undermining our cultures more than they help refresh them? How can we measure or estimate it?

    1. Thanks, Anders. I like your image of reason aiming at a ‘right target’. The image I often use is ‘self-censoring’ reason, constantly checking its own boundaries to avoid trespassing.

      As you are probably aware, I do not hold that rational scrutiny generally does more harm than good. But I think it can do harm when reason forgets to censor itself, when reason tries to be unreasonably rational.

    2. Well… maybe… but it’s hard to say what the “right” target is. You don’t need to be a thorough-going relativist to think reasonable minds may differ about the right target. Even if we agree about which issues are important, we usually disagree about how to solve them. Economically literate people usually agree that creating a positive-sum game via economic growth is better than creating a zero- or negative-sum game through stagnation or recession, but there are massive disagreements about how best to ensure growth, even among the highly literate.

      I’m not sure how you’d institutionalise any prioritisation strategy. Explicit gatekeepers are an obviously dodgy idea. I guess research councils very weakly do this anyway, by funding some research and not other bits. So do universities, over the long run, via their successive hiring decisions. Though we know there are issues with those, too (e.g. Jonathan Haidt’s stuff…).
      As a former Treasury official, my inclination would be to “stick a price on it”, since this gets people to reveal a preference for which issues they’re actually prepared to put money down for. So let’s treble or quadruple publication fees for ethics journals.*

      *As Kei says, my incentives here are to “Defend a ridiculous ethical position you do not even believe in, and you may be a ‘high impact’ ethicist!”

  4. Very interesting post!

    “Defend a ridiculous ethical position you do not even believe in”

    Could you provide a concrete example of a ridiculous ethical position that professional ethicists (in recent times) have defended, for the primary purpose of sparking controversy?

    1. Thanks, Spencer. I’m glad you liked the post.

      I’m afraid I don’t have an answer to your question, because ‘Defend…’ is not a description of any specific individual’s conduct, but an illustration of the worrying incentive structure that professional ethicists in general find themselves in. Fortunately, I personally know of no ethicists who’ve defended a ridiculous position for the primary purpose of provoking controversy. This, needless to say, does not mean that the incentive structure I’m concerned with does not exist.

      1. I am quite sure a quick dig into the US presidential elections may give you some concrete examples. While I don’t want to judge the actual merits of the candidates (I don’t even vote there), some candidates have defended ethical positions in which they don’t believe (or aren’t supposed to believe).

        But, alas, they are not ethicists, and they certainly don’t do it for research impact (they should care about ethics, though). Even so, I would say that finding such concrete examples might well be impossible, given the plain wrongness of the deed.

        1. You’re right; examples are more easily found in politics, especially in modern democratic politics, where office-holders and office-seekers must be responsive to voters’ preferences. This raises a set of interesting questions. E.g. are democratic leaders more prone than non-democratic ones to defend positions they do not believe in? If so, is a certain kind of dishonesty an essential characteristic of democratic leadership?

          1. Good question. It may have something to do with the fact that the current structure of democratic elections privileges not one’s competence or honesty, but rather one’s success in persuading the voters. Persuasion, as we have known since Socrates, has no connection with the truth.

            On a similar note, is that what the people want? I have never seen anyone praise a president for his or her honesty.
            Far more important attributes seem to be negotiation skills, foresight, religion and knowledge of the economy.

  5. It’s not obvious which questions are worth pursuing. Even if we come up with a minimal list, some will certainly disagree. It is more efficient to ignore questions that don’t seem relevant than to create more heat.

    We could enumerate the questions that are worth debating and assume “that’s the list”. But for something to become a consensus, time is needed, and much evidence too.

  6. Dear Kei,

    Thank you for your suggestive piece on ‘The Rationalist Prejudice.’ A noted ethicist, a friend of mine, once confessed that he defended ethical positions in which he didn’t believe just to stimulate discussions at conferences or on his TV talk shows. This really disappointed me. Your piece reminded me of the criticism by Prime Minister Shigeru Yoshida (1878-1967) of Dr. Shigeru Nambara (1889-1974), the first President of the newly established University of Tokyo. Yoshida wished to conclude a peace treaty with the countries which would admit Japan’s independence. But Dr. Nambara proposed to make a peace treaty with every country of the then world, including Soviet Russia. Irritated by Nambara’s unrealistic discourse, Yoshida criticized him by calling him ‘曲学阿世.’ This is an old Chinese expression for a learned time-server. I can agree with Kei’s points, but I also think that 曲学阿世 are important for the advancement of our societies. In the above case, Yoshida was politically right, for Japan regained her independence by his decision. On the other hand, Nambara was theoretically right. We need both approaches to any issue. That attitude protects democracy. I would be sorry if my remarks do not respond to your discourse.

    Shigeru Aoyama

    1. Thanks for your comments, Shigeru. I agree with your view that the adjudication of competing claims is an essential part of democratic politics. I also agree with your historical observation that the two options discussed in Japan prior to the signing of the San Francisco Peace Treaty – known as ‘katamen kowa’ and ‘ryomen kowa’ – were both genuine options that deserved serious consideration. Having said that, I would add a further point: political decision-making and philosophical reasoning are essentially different. Statesmen must consider various factors that philosophers can safely ignore as irrelevant; above all, they must semi-instinctively be able to see which options on the table are feasible and which are not. Was Yoshida a great statesman amply endowed with such a ‘sense of reality’? This is an interesting question to think about – unfortunately, I do not know the answer.
