
Guest Post: Consequentialism and Ethics? Bridging the Normative Gap.


Written by Simon Beard

University of Cambridge

After years of deliberation, a US moratorium on so-called ‘gain of function’ experiments, involving the production of novel pathogens with a high degree of pandemic potential, has been lifted [https://www.nih.gov/about-nih/who-we-are/nih-director/statements/nih-lifts-funding-pause-gain-function-research]. At the same time, a ground-breaking new set of guidelines about how and when such experiments can be funded has been published [https://thebulletin.org/new-pathogen-research-rules-gain-function-loss-clarity11540] by the National Institutes of Health. This is to be welcomed, and I hope that these guidelines stimulate broader discussions about the ethics and funding of dual-use scientific research, both inside and outside of the life sciences. At the very least, it is essential that people learn from this experience and do not engage in the kind of intellectual head banging that has undermined important research and disrupted the careers of talented researchers.

Yet, there is something in these guidelines that many philosophers may find troubling.

These new guidelines insist, for the first time it seems, that NIH funding will depend not only on the benefits of scientific research outweighing the potential risks, but also on whether or not the research is “ethically justified”. In defining what is ethically justifiable, the NIH make specific reference to standards of beneficence, non-maleficence, justice, scientific freedom, respect for persons and responsible stewardship.

Much has been made of this additional dimension of evaluation and whether or not review committees will be up to assessing it. Whereas before, it is said, they merely had to assess whether research would have good or bad outcomes, they now have to determine whether it is right or wrong as well!

To many consequentialists, such concerns carry a sense of the absurd about them. On the one hand, how could genuinely beneficial scientific research not also be ethical? On the other hand, how could a genuinely comprehensive risk-benefit assessment not already be taking into account all the ethically salient features of a research project?

Surely, part of the answer here is that, despite its popularity amongst people working in fields like bioethics and existential risk, consequentialism is not viewed as an adequate conception of what it means to be ethical by many working in the life sciences. Ethics, for them, is something to do with codes of practice, focus groups and the need to negotiate complex cultural, religious and political considerations, and consequentialism just isn’t rich enough to cover all these concerns. Sure, one can operationalise consequentialist ideals into codes of practice, promote them amongst focus groups and offer them as the correct solution to issues in the public discourse, but for many, it seems, consequentialism remains normatively unsatisfying. That is a problem if consequentialists want to change people’s opinions on important issues like the public funding of dual-use biomedical research.

So, what is a consequentialist to do?

One reasonable reaction is to offer a reductionist translation of people’s non-consequentialist concerns into purely consequentialist terms. Indeed, many would see this as the most rational response a consequentialist could make. For instance, one might argue that non-maleficence is about not harming, but that the best way not to harm is to seek to minimise the amount of harm being done, rather than simply to avoid doing harm oneself. Alternatively, one might argue that issues such as ‘justice’ and ‘scientific freedom’ are only of instrumental value. For instance, as I argued in a previous post here, it is plausible that if ‘gain of function’ research is conducted for reasons that are just, then it will likely also be of net benefit, whilst if it is conducted without regard to justice, but simply in response to what is most profitable, then it is likely to be, on balance, bad [https://blog.practicalethics.ox.ac.uk/2016/04/guest-post-scientists-arent-always-the-best-people-to-evaluate-the-risks-of-scientific-research/].

While entirely consistent with the consequentialists’ fundamental principles, it must be said that such arguments generally fail to satisfy anyone with non-consequentialist moral leanings. Indeed, there are good arguments to be made that some non-consequentialist concerns could never be justified on consequentialist grounds [https://www.research.ed.ac.uk/portal/files/12473535/BROWN_C_Consequentialize_This.pdf], even if consequentialists were willing to be very flexible about the content of their theories. I suspect that when people raise issues about non-maleficence and justice, they do so explicitly to oppose consequentialism, so that any reductionist attempt to interpret these terms in consequentialist ways will be seen as obfuscation, if not fraud.

Alternatively, one could take up an approach, like that proposed by Will MacAskill, of granting that we do have both consequentialist and non-consequentialist leanings and that we may, therefore, have reason to give some credence to both of these ethical theories. Under such circumstances, this approach suggests, the correct response is not simply to adopt one theory to the exclusion of all others, or to try to find a way to satisfy all moral theories simultaneously, but to work out the Expected Moral Value of an action (that is, its value according to each moral theory we have reason to accept, multiplied by the strength of reasons we have to accept that theory, and summed across those theories) and to maximise that instead [https://blog.practicalethics.ox.ac.uk/2012/01/practical-ethics-given-moral-uncertainty/].
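
To make the proposal concrete, here is a minimal sketch of the calculation; the notation is mine, not MacAskill’s. Writing \(c_i\) for the credence (strength of reasons) we assign to a moral theory \(T_i\), and \(V_i(a)\) for the value that theory assigns to an action \(a\), the Expected Moral Value of \(a\) is

\[
\mathrm{EMV}(a) \;=\; \sum_{i} c_i \, V_i(a), \qquad \text{with } \sum_{i} c_i = 1,
\]

and the recommended action is whichever one maximises this quantity, \(a^{*} = \arg\max_{a} \mathrm{EMV}(a)\).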

One might then argue that, for many globally important issues, such as the eradication of poverty or the prevention of global catastrophes, the amount that is at stake in terms of human lives and human suffering means that the implications of our actions according to consequentialism will be the prime determinant of their expected moral value, even if our credence in consequentialism itself is low.
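
A toy calculation, with purely illustrative numbers of my own, shows how this dominance can arise. Suppose our credence in consequentialism is a modest \(c_{C} = 0.2\), leaving \(c_{N} = 0.8\) for a non-consequentialist rival, and that a pandemic-prevention project scores \(V_{C} = +1000\) on the consequentialist scale (many lives saved in expectation) but \(V_{N} = -10\) on the rival’s scale (a modest wrong done). Then

\[
\mathrm{EMV} \;=\; 0.2 \times 1000 \;+\; 0.8 \times (-10) \;=\; 200 - 8 \;=\; 192 \;>\; 0,
\]

so the consequentialist stakes dominate the verdict despite the low credence, granting, of course, that the two theories’ value scales can be compared at all.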

Again, I am sympathetic to this sort of approach. However, it strikes me as being most appropriate when used to reconcile competing intuitions behind theories that merely present different ways of aggregating one particular moral value, such as utility, but not when they involve different approaches to value in general. For instance, Hilary Greaves and Toby Ord have proposed such an approach to deal with disagreements between different kinds of utilitarian about population axiology [http://users.ox.ac.uk/~mert2255/papers/mu-about-pe.pdf]. In such cases, it is clear that there are no normative disagreements about the nature of value itself at stake. When, however, we are trying to judge between more diverse moral theories, such as consequentialism and non-consequentialism, this is a lot more doubtful. Would somebody who was a Kantian or a contractualist even accept the concept of ‘moral value’ as something that could be maximised, when so much of their ethical theory is about the impermissibility of value trade-offs and aggregation?

This suggests a third approach for consequentialist philosophers to engage with such disagreement: they simply have to roll up their sleeves and get their hands dirty resolving it.

This is the approach most notably taken up by Derek Parfit in his monumental On What Matters [https://en.wikipedia.org/wiki/On_What_Matters], in which he argued that different kinds of moral theory were all ‘climbing the same mountain’ and could realistically expect to resolve their disagreements given enough intellectual effort. However, there is nothing especially new about such arguments; they go back at least as far as Henry Sidgwick’s Methods of Ethics. Indeed, my own philosophical career was almost over before it had begun when, as a first year, I read ‘Could Kant Have Been a Utilitarian?’ [https://www.cambridge.org/core/services/aop-cambridge-core/content/view/S0953820800005501] and, somewhat naïvely, decided that the answer was yes.

While, like many philosophers, I am yet to be fully convinced of the unity of all moral theory, this is nevertheless the approach I wish to recommend. Two recent efforts to open up the interface between consequentialist and non-consequentialist ethics from the consequentialist perspective have particularly interested me. Firstly, there is Alex Voorhoeve’s intriguing approach to distributive justice, known as the Aggregate Relevant Claims view [http://eprints.lse.ac.uk/55883/]. This view offers a precise and workable justification for distinguishing between those ethical situations in which individual costs and benefits can easily be traded off against one another (as most consequentialist theories insist) and those in which they cannot (as many non-consequentialists demand). Then, there is the first of Larry Temkin’s recent Uehiro Lectures [https://podcasts.ox.ac.uk/2017-annual-uehiro-lecture-33-obligations-needy-some-empirical-worries-and-uncomfortable], which made a series of compelling arguments for why, and how, Peter Singer’s famous ‘drowning child’ argument for utilitarianism [https://www.utilitarian.net/singer/by/199704–.htm] should be situated within wider, non-consequentialist, moral concerns.

My hope is that such efforts can help us produce better moral theories, able to incorporate even more of our intuitive moral concerns into a rational ethical theory, and that they also offer consequentialists, in particular, the opportunity to engage productively in wider moral debates. Even if we can never fully bridge the gap between different moral theories, I think that this work can at least prepare the ground for either the reductionist or the expected moral value approach to resolve disagreements more convincingly. Furthermore, while I admire the normative purity of the consequentialist approach to ethics, this strikes me as an important step to take in a world where a statement as significant as the rules governing the development of dual-use research can treat consequentialism and ethics as simply two different things.


5 Comments on this post

  1. As someone who studied under Richard Hare, and who even remembers (favorably) reading “Could Kant Have Been A Utilitarian?” (but who is having a blasted hard time re-accessing it again just now), I would like to ask if this blog is supposed to focus narrowly on consequentialism, pro or con, or if it is intended to address the much broader question (surely of universal concern) regarding just how an ethical assessment of “gain of function” research into potentially lethal human pathogens should be done. I can understand why the new guidelines may seem confounding to one steeped in consequentialist theory (“Whereas before, it is said, they merely had to assess whether research would have good or bad outcomes, they now have to determine whether it is right or wrong as well!”), but I would first have to ask, before even beginning to consider the issue at hand (“gain of function” research), how familiar the ethical theorists who may be confounded by this charge actually are with the kinds of things that eventually do make it into the “calculation of risks and benefits” when policy decisions are finally made. As an environmental philosopher, it has been my experience that one of the dangers of “risk-benefit” analysis is that it often morphs into fairly straightforward monetary calculations when the rubber starts meeting the road. What all must enter into “a genuinely comprehensive risk-benefit assessment,” and how does one negotiate between the ethical and the economic? How are the long-term consequences (on the human species and on the biosphere at large) of actions with admittedly unpredictable outcomes (such as many of the “social experiments” we are rather sanguinely embarking upon now, fiddling with CRISPR technologies and so on, clearly have) going to be considered and weighed in view of both (explicitly addressable) shorter-term financial incentives and (less explicitly addressable but all the more powerful because of it) social pressures aimed at furthering careers? Even in this post, I am a bit concerned about the referent of “This is to be welcomed” (the new guidelines, or the lifting of the moratorium?), as well as the desire to avoid “the kind of intellectual head banging that has undermined important research, and disrupted the careers of talented researchers.” Perhaps there are some kinds of technologies that we as a species should simply decide NOT to employ, even if it means that the trajectories of certain “careers” will have to change.

    1. Dear Ronnie, thanks so much for this thoughtful comment. The piece is intended more as a discussion of consequentialism, but it is born out of a general dilemma I have about how to respond to this kind of situation. I am definitely aware of the limitations of Risk-Benefit Analysis and of the complexities of this research, even if I am using this particular example simply as a way in to talking about something else. However, I am also a card-carrying and committed consequentialist. The dilemma I feel, then, is how to engage in these debates. On the one hand, my consequentialism makes me feel that my job is simply to advocate for better and fuller risk-benefit analyses. I very much dislike finding myself in the camp of people who want to say things like ‘some sorts of technology just shouldn’t be used’, because that kind of prohibition simply strikes me as irrational and unjustifiable. On the other hand, I am well enough aware of the dangers here that I agree that getting a risk-benefit analysis wrong, even to a small extent, is probably far worse than a simple prohibition, and that there are many people out there who have a vested interest in seeing that these things are not done well.

      This is why it matters to me that consequentialists can engage productively with non-consequentialists. I simply refuse to choose either to reject all the legitimate points that critics of risk-benefit analysis have to make about the way these things are carried out, or to reject what seems to me to be the best and most rational underpinning for the assessment of technological risks. Hence my desire to try and bridge the normative gap between consequentialists and their critics. It’s a personal dilemma, but I think it is nevertheless an important one.

      Finally, on the head banging point. I am referring in this case to the particular intellectual deadlock that seemed to emerge surrounding Gain of Function research, not to the fact that legitimate questions were raised. Part of this deadlock emerged because we are dealing with novel technologies and unprecedented risk; part of it emerged because of the interaction between issues that people felt belonged squarely in the scientific realm and the wider ethical and social issues that surround them; and part of it, I think, occurred because clever and well-meaning people talked past one another in louder and louder voices rather than finding reasonable grounds for working out their differences. Next time, I hope we will be better able to deal with this, and that is what I meant when I said that I welcomed this resolution: both that the arguments had worked themselves out and that they seemed to have produced something genuinely promising and new.

  2. Thanks for your response, Simon, and for introducing the topic on this blog. Since I am a newcomer here, I was not privy to the earlier discussion, or “headbanging,” regarding “gain of function” research. In fact, even the term is new to me–why not call it research into lethal human pathogens? Is the term itself designed to conceal what’s really being talked about? And please tell me–is there a new technique that has been introduced to enable the new pathogens to “gain function”–as in the ability to spread and/or kill/escape from present conditions more readily? What sort of “function” is being “gained”?

    But, be that as it may, I’d like to ask, why must we continue to think and talk in dualistic terms like “consequentialist and nonconsequentialist,” when the spectrum of ethical theories is far more diverse than that? Moreover, when dealing with issues concerning “emerging technologies” that potentially pose great harm to humans and other forms of life, why must we try to consign our thinking to one or more abstract “theories,” an approach that in some respects already puts a person at one or more removes from their lived context (which is certainly of relevance when issues of such an existential nature are under consideration)?

    You express a worry that there are “many people out there who have a vested interest in seeing that these things are not done well,” but (perhaps because I have not been privy to the discussion) so far I fail to understand what sorts of “things” we–humanity–should embrace even if they ARE “done well.” What, specifically, needs doing in this regard? I can think of a lot of things that “need doing” in the big picture: we humans need to wean ourselves off of fossil fuels fast if we’re to avoid unsurvivable climate change, we need to slow down and eventually stop our rate of growth in both population and economically driven resource consumption if there’s to remain a habitable Biosphere–things like that. Does this initiative in some way fit into returning our species to a sane trajectory (I hope, in the case of this kind of research, it’s not being thought of as a way of attaining the latter goal, rapidly–or is that OK if it’s only, supposedly, for “the enemy’s” population!)?

    Also–perhaps you could address this from within consequentialism–when I look at what seems to have been gained, the “benefits,” from two other examples of “emerging new technologies,” bioengineering and nanotechnology, I’m afraid I fail to see the much-touted gains so far. Have our lives been made significantly better by GMO corn and soy (and the accompanying millions of glyphosate-soaked acres of land), or by nanoparticle-ized titanium dioxide sunscreens? Please. I have no doubt that lots of money has exchanged hands as a result of their introduction, of course, and many “careers” have been made, but could you give a thumbnail sketch of how a “comprehensive risk-benefit analysis” would weigh the relevant factors regarding these technologies, which have been out for a number of years now?

    1. Dear Ronnie, thanks again for this reply. There is a huge amount here and you’ll have to forgive me for simply ducking most of these questions – sorry, I just don’t have all the answers for you!

      On the term ‘gain of function’ – this is a terrible term for the research under discussion, as many people have noted. I only use it in this context because that is what the debate has tended to be known as outside of specialist circles. “Dual Use Research of Concern” is a much better label for what is being discussed. It is dual use because it could have good or bad outcomes, and it is concerning because its bad outcomes could be very bad indeed.

      On the poverty of dualistic thinking in ethics, I totally agree with you about the danger of getting trapped in an ‘either one thing or the other’ method of thinking. I think, however, that you go on from this to suggest that there is something wrong with theorising about ethics in general, and there I would not agree. Yes, all ethical theorising will be in some sense reductionist and lose some of the nuance of our lived context; however, I still think it is very helpful for gaining a clear insight into the things that matter and the things that don’t matter. One does not need to go very far down the rabbit hole of ethical theorising before realising that many of our intuitive and contextually based modes of reasoning can be deeply flawed, as movements such as Effective Altruism clearly demonstrate. If this causes us to step outside of our lived context from time to time (in a ‘cool hour’, as Sidgwick put it), then I think that is a very good thing indeed.

      On the things that need doing most, I would say that the eradication of poverty and the worst forms of human suffering together with the preservation of our species are the stand out candidates, although we should also be very concerned about our impact on the environment and the welfare of non-human animals. I think that it is very easy for those of us who live in relatively healthy western societies to dismiss the need for research that could eradicate pandemic pathogens like influenza, due to their lack of relevance to our lived context, but ultimately this is such a pressing concern that it is worth accepting some risks if they could significantly shorten the time it takes to achieve protection from global pandemics, which are a significant threat to our future and a great burden to many of the worst off in the world (although admittedly that is quite a big if).

      And on what a well-conducted Risk-Benefit Analysis might look like and how to apply it to GMOs and nanotechnology, that is a huge question, and the best I feel I could do would be to point you to another article I wrote on the topic, although that is really nothing more than a gesture in the right direction: http://quillette.com/2017/11/28/should-we-be-worried-about-gmos/. My research may or may not yield something more comprehensive in due course.

  3. Dear Simon–thank you for your response. I see it is dated March 24, and I don’t know why notification of it is just now coming to me on April 17, but I’d like to reply, very briefly, if I may.

    I see you say that your candidates for “things that need doing most” are “the eradication of poverty and the worst forms of human suffering together with the preservation of our species,” while mine are “to wean ourselves off of fossil fuels fast if we’re to avoid unsurvivable climate change” and “to slow down and eventually stop our rate of growth in both population and economically driven resource consumption.” I think the difference here is that we are using two rather different paradigms, both ontologically and ethically. Yours is Anthropocentric; it starts from a perspective “within” the human species, as abstracted from its larger, biospherical context; mine is Biocentric, since what I’m putting at the center of both my ontology and my ethics is the phenomenon of Life in its many forms, as they have evolved on this planet, as they have functioned together ecologically across eons of geological time. We humans are a part of that life, but only a part, and the more we increase our numbers and take living space from other forms of life the greater the systemic destabilization becomes–making it all the harder to “eradicate poverty,” although continual population growth without limit tends to frustrate the attainment of that goal all by itself, as J. S. Mill was well aware in his day.

    As for the research that “could eradicate pandemic pathogens like influenza,” I do not doubt the likelihood that there may well be “global pandemics” in our future, as we continue to build up massive slums with poor sanitation and little health care around the burgeoning urban centers that are scattered across the globe now. This was not smart! But somehow, clinging to an Anthropocentric paradigm, refusing to look at ourselves “from the outside” and grasp what we’ve been doing in the larger picture, that’s where we are now. If we continue on in this direction sooner or later something is going to restore a species balance to the planet, and the third horseman, pestilence, might be more merciful than some of the alternatives. It would be kind of ironic, I suppose, if the “Dual Use Research” turned out to be the source of such a pandemic, but the consequences of a number of other abundant-benefit-promising “emerging new technologies” in their day are turning out to be somewhat unexpected and less than desirable – just look at “Plastics”! (Wouldn’t we call them back now, if we had the chance, in light of the Great Oceanic Garbage Patches, and the microplastics contaminating every food web on Earth?)

    I have nothing against “ethical theorizing,” as long as it’s done in cognizance of our overall ontological situation. Without being able to see that larger picture, we’re just whistling in the dark.
