Guest Post: Alexander Andersson, MA student in practical philosophy, University of Gothenburg
Email: gusandall[at]student.gu.se
In Unfit for the Future: The Need for Moral Enhancement, Ingmar Persson and Julian Savulescu argue that we, as a human race, are in deep trouble. According to the authors, global warming, weapons of mass destruction, poverty, famine, terrorists, and even liberal democracies are candidates as components in our potential apocalypse. These issues require us to make the morally right decisions; however, our current moral deficiencies seem to prevent us from making those decisions. As the authors put it:
[H]uman beings are not by nature equipped with a moral psychology that empowers them to cope with the moral problems that these new conditions of life create. Nor could the currently favoured political system of liberal democracy overcome these deficiencies. (Persson & Savulescu, 2012, p. 1)*
It is therefore desirable to look for means or solutions to rid ourselves of these deficiencies, which would in turn make us morally better persons, thus allowing us to avoid the disastrous situations which otherwise lie ahead. Luckily, Persson and Savulescu do not seem to suffer from moral deficiency, which enables them to put forth a creative plan to save the day.
The proposed solution is moral bioenhancement. In short, this means altering people’s moral psychology by biomedical means in order to make them more disposed to do the right thing (or at least less disposed to do bad things). Oxytocin, which increases trust and trustworthiness, is mentioned as a contender for the procedure (although it has not yet shown any promising results for this purpose).** Selective serotonin reuptake inhibitors (SSRIs) are also used as an example: they make people less prone to accept unfair offers, but it is hard to know whether this is due to a greater sense of justice or simply to increased irascibility. The fact that the authors cannot point to any concrete methods for enhancing altruism, sympathy, or a sense of justice makes their suggestion less feasible in my mind. However, the authors excuse this by stating:
There are then prospects of moral bioenhancement, even if so far no biomedical means of moral enhancement with sufficiently precise effects have been discovered, and perhaps they never will be. However, it is not surprising that no straightforward moral enhancers have hitherto been discovered because research into moral enhancement is a tiny field that is only a few years old. (Persson & Savulescu, 2012, p. 121)
Apart from the obvious difficulties in forcing the morally corrupt to become pill addicts for the greater good and in judging people’s capabilities from a moral high ground, this, the book’s major idea, suffers from a fundamental problem of practicality and implementation.
Given the premise that people are bad – where should this massive project of moral bioenhancement take off? The task of enhancing people’s moral psychology ought to be carried out by a worldwide institution governed by strict principles and policies, if the problems pinpointed by Persson and Savulescu are ever to be taken on. But if we design policies before the moral enhancement takes place, then these policies will be defective, or at least influenced by our moral deficiencies. On the other hand, it seems hasty to enhance people before we design any principles, since the implementation of moral bioenhancement must necessarily be monitored or governed by some institution as well, and that institution should not be morally defective. No matter how we twist and turn this case, there will always be a shadow of moral deficiency present in the authors’ massive project. And if Persson and Savulescu were to retreat from their original premise in order to avoid this issue, then there would be no need for altering people’s moral psychology in the first place.
——
* You could argue that ought implies can and that there are no moral obligations to be found since the authors state that we are by nature unable to cope with the circumstances. Thus, we cannot be obligated to do something we are unable to do. However, the counterargument goes as follows: if moral bioenhancement were to become a reality, it would make us able to cope with our current issues, ergo, we are in an indirect way able to solve these problems.
** It is also not evident how trust and trustworthiness are relevant to the problem.
I’m sympathetic to your main point, Alexander; we have to wonder whether fallible world governments can be trusted to morally improve their societies. I wonder, though, if you’re willing to extend this to old-school methods of moral education? Moral education should similarly presume that people are bad, and can be morally improved with certain educational approaches. But this means parents/school admins are bad too, so a similar problem (moral reform run by those with moral defects) emerges.
One could insist that, with moral education, it’s the kids that are bad – not the adults. But then, your argument wouldn’t apply to a (plausible) mode of moral enhancement, one that targets children (esp. prenatal genetic intervention). And anyway, I find it implausible that adults are moral saints that need no improvement.
For my part, I’m willing to accept the conclusion – fallible agents shouldn’t be in the business of imposing value on others (at least not directly). Solutions for getting a more moral society should be more indirect.
Thank you for your input, Owen. As you say, it is fair reasoning that traditional moral education will be as impotent as moral bioenhancement if we accept the premise that people are bad. However, I do believe that the latter involves greater risk and uncertainty than the former. But it is certainly possible to disagree. Here are some thoughts on the subject:
(1) If we stick to the authors’ premise that people are bad, then we’ll be stuck in a paradoxical circle. We are bad and we can’t get better, since any attempt to improve ourselves will be tainted by our deficiency. And yes, this goes for traditional moral education as well. Just to clarify: even if we are morally deficient, there will still be people who are relatively better by this thin measure. Thus, my point is that moral bioenhancement is a risk we should not be tempted to take, since traditional moral education does not involve that risk. And if the result is the same, then it simply seems irrational to follow through with moral bioenhancement. Also, traditional moral education allows a natural progression between discourse and implementation, while moral bioenhancement seems to be delivered as a “final” product.
(2) If we were to reject the original premise, then it would be possible to enhance ourselves traditionally, which I believe we can (for example, I believe that we have become morally better persons since the Dark Ages). Once again, moral bioenhancement involves too many risks and dangers, even though our moral behavior might improve much more quickly if we were to implement it, and that’s a sufficient payoff in my opinion; therefore I would also stick with traditional moral education over moral bioenhancement.
I hope that I answered your questions.
Noticed a little mishap at the end of my comment. I was meaning to write that the efficiency of moral bioenhancement is not a sufficient payoff in my opinion.
Thanks for the thoughtful reply. On consideration, I wonder if there’s some ambiguity on the content of the premise that people are ‘bad’. I took the premise to be, simply, that people are fallible – the negation of which is implausible. But you seem to understand it as something like, typically people make bad moral decisions over good ones. That is very pessimistic – and I don’t believe it’s implied by Persson and Savulescu. It might be that, the great majority of the time, we make good moral decisions. The problem is that sometimes, we fail – but fail *catastrophically*. And due to the asymmetry between the aid of good decisions and the harm of bad ones, the risk of those failures makes our current moral decision-making framework dangerous. So: we can use our usually-reliable judgments to formulate a program to root out immorality. This is why we can be justified in allowing legal regulation of behaviour for moral reasons (NB: I take social well-being to be a moral reason to enact laws).
The risk of getting the enhancement wrong, of course, is still there as long as we accept fallibility, and I agree that should give us pause with these sorts of programs. But you claim the risk is (substantially) greater for moral enhancement than moral education. Why is that the case? Because it might make more drastic changes to moral ideas/do so quickly? But that’s the whole point of moral enhancement – to succeed in (say) addressing global terror and climate change where traditional means have failed to alter things quickly/effectively enough. And even if the risk profile is still too great (maybe you like the precautionary principle), you should still be at least OK with “weak” moral enhancements – interventions that budge people’s inclinations by a few percentage points or so (to the same magnitude as traditional moral enhancements). Uncertainty still looms and might militate against gung-ho deployment of currently-available enhancers, but that just means (as Persson and Savulescu argue) we need more research – just as there has been considerable work on moral education.
What I mean when I state that people are bad (and what I think is implied by the authors) is similar to what you oppose here. It might be that I have a “Hobbesian” reading of Unfit for the Future, but that is simply the most plausible reading in my mind. What Persson and Savulescu point to are systematic errors in our world – global warming, poverty, famine, racism etc. It seems fairly unlikely to me that these problems arise from the occasional “catastrophic” moral behavior rather than from a limited moral psychology. There is a limit to exactly how altruistic, just, fair, motivated etc. we can be. Do not get me wrong, we can still act morally well – but we cannot act as well as our current situation requires. Let me illustrate my point. Someone who is a racist is probably not an occasional racist; he is probably racist most of the time (or all of the time). This could be due to a lack of empathy, biases, faulty moral reasoning, or a combination of these (just to name a few options; it’s not an exhaustive list). I do not believe, as has been proposed here in another comment, that people are weak. The concept of a weak moral psychology might work fine when we apply it to the smoker who cannot stop smoking, but in the case of racism, sexism, or intentional unnecessary pollution (e.g. someone who drives a pimped SUV because it’s cooler to have swag than to save the planet) it does not seem to be the best explanation. I find it hard to believe that a racist is racist due to a lack of moral motivation; he probably acts according to his moral judgments (which are racist), though of course there might be some exceptions to this rule (peer pressure etc.). Once again, I am not stating that people are definitively bad (I agree that it is an ambiguous term), but most people are simply not good enough.
The moral psychology that we are equipped with might have been good enough when we weren’t facing issues of this grand scale, but now it needs to be enhanced, and the best way to do that is moral bioenhancement (since it is presumably efficient, among other reasons). That’s my reading of the presentation of the problem. But, as you have stated, you could probably read it with kinder eyes.
I find what you say about a “weak” moral enhancement interesting, but I am still reluctant to accept it (maybe I have conservative intuitions?). Still, if you present moral bioenhancement as 100% efficient and totally safe, you could argue that it violates autonomy (I think John Harris pushes that point to the limit). And, in comparison with traditional moral enhancement (e.g. a lecture), I could always choose to be ignorant and stop listening to what is said. In sum, it seems to be a clash between rights and consequentialism, and I hold the former dearer than the latter. Another problem: what if we just increase moral motivation? Make people more committed to their moral judgments – wouldn’t that make people with bad beliefs even worse? And if you argue that we ought to change their beliefs as well as their motivation, then what is the difference from brainwashing? And if you want people to act as you wish, there are a lot of cheaper methods than moral bioenhancement.
Thanks for the valuable input! I hope that I answered your questions.
What about the following scenario (very loosely based on Greg Bear’s science fiction novels Queen of Angels, Slant and Moving Mars)?
A moral enhancement therapy is developed, tested and demonstrated to be efficacious. Some nice people of course use it (but they do not need it). Most people feel they do not need it themselves.
However, for certain jobs, being able to show that one has moral enhancement is an advantage. When applying to be a policeman, lawyer, doctor, teacher or any other job where your morals might be relevant, enhanced people are more likely to get the job. Financial institutions would love to reduce operational risks by having enhanced people around. No doubt there are jobs like used-car salesman or arms dealer where being enhanced would be a disadvantage, but most popular jobs seem to be in the moral-advantage group. Even as a politician, being able to show that you are morally enhanced would give an advantage; whether that outweighs the disadvantages to tactics depends largely on the political system. Hence people on the margin will choose the enhancement: it gives them better opportunities. They might think they are moral enough already and are just getting a slip of paper proving it, but in reality they are getting an improvement.
As more and more jobs are done by morally enhanced people, the stigma of not being enhanced becomes worse. Hence it becomes rational for more people to take the therapy. This is a purely positional aspect, independent of the therapy actually working. But if it also works, we should expect the enhanced professions to act better: it is in the interest of everybody who uses them to make them more ethically enhanced, so expect demand for enhanced teachers, doctors or security guards. Also, presumably ethically enhanced people would like to see this happen, (1) at first (when they are still relatively few and not competing much) to better their own chances, and (2) because they of course like to promote ethical behavior.
So while this dynamic is not guaranteed to convert everyone (there is interesting economic modelling to be done here, especially at the interface with the shadier occupations), it looks like the returns to trustworthiness and other enhanced traits would lead to a society with a fair number of morally enhanced people. They would presumably not just be enhanced on the job but in their spare time too – while voting, while making economic choices, while rearing their children and interacting with others.
It is not clear to me that we need 100% moral enhancement uptake to get useful effects on a global scale.
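The purely positional uptake dynamic described above can be sketched as a toy threshold model, in the style of Granovetter’s collective-behaviour models. Every number here is a hypothetical illustration, not an empirical estimate: each agent has a personal reluctance threshold, and adopts the enhancement once the enhanced share of the workforce, plus a fixed hiring advantage, exceeds that threshold.

```python
# Toy threshold model of moral-enhancement uptake.
# All parameters are hypothetical illustrations, not empirical estimates.
import random

random.seed(0)

N = 10_000                                  # population size
# Personal reluctance thresholds, uniform on [0, 1). An agent adopts when
# the enhanced fraction of the workforce plus the hiring advantage exceeds
# their threshold.
thresholds = [random.random() for _ in range(N)]

advantage = 0.10     # positional hiring advantage of certified enhancement
enhanced = 0.02      # small seed group of "nice people" early adopters

for _ in range(200):
    new = sum(t < enhanced + advantage for t in thresholds) / N
    if abs(new - enhanced) < 1e-9:          # fixed point reached
        break
    enhanced = new

print(f"equilibrium enhanced fraction: {enhanced:.2f}")
```

With a positive `advantage` the enhanced fraction climbs by roughly the size of the advantage each round and settles at full uptake; setting `advantage` to zero leaves uptake near the 2% seed level, which is the sense in which the cascade is driven by positional incentives rather than by the therapy itself.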
I don’t think the incentives align well here: “Moral” is underdefined as a category. In the marketplace of getting jobs or votes, it might mean things like “truthful” or “loyal”. Some of these are at odds with some conceptions of morality. Perhaps a company wants a loyal employee, perhaps an electorate wants a citizenist and patriot, perhaps a government wants someone with proper ideological affiliation. All of these may be at odds with a morality that cares about the interests of everyone else.
In theory, you could set the right incentives as a form of charity: You could pay people to have the “right kind” of moral enhancement. But this is just a stronger form of advocacy, and it suffers from the same bootstrapping problem: All inhumane ideologies can do it too.
Fascinating stuff. Innovations in institutions can deal with some of the things you’re talking about. Creating a shared economic destiny for France and Germany seems to have stopped these guys beating the crap out of each other (for now, at least). Democracies generally avoid getting into wars with other democracies. Nuclear powers generally avoid getting into wars with other nuclear powers, too. Plus, global institutions can point to successes that would not have happened in their absence: the eradication of smallpox, for instance (smallpox killed ~300M people in the 20th century, and then it stopped because people collaborated to make it so; if you stacked the bodies into piles, that’s six lots of WWII).
I don’t see how making people more morally sensitive* would help where we cannot agree about values at the scales we need to. Climate change, for instance, brings some values issues into play. It is fair to say India and the United States struggle to reach agreement about how to conceptualise climate change as an ethical problem: for India it’s an issue of colonialism/climate justice; to the US it’s a pollution-externality problem. Making people more morally sensitive would seem only to deepen such disagreements. Moral enhancement might actually divert attention away from issues like climate change, since most of the ways in which climate change kills or seriously harms people are only indirectly amplified by climate conditions. There are other, more proximate causes, and morally-enhanced people might (reasonably) go after those proximate causes rather than the amplifier of those causes. According to the WHO’s 2009 global health risks report, climate change kills people primarily via bad sanitation, malaria and schistosomiasis. So it may be that morally-enhanced people would go after these more proximate issues in preference to going after the amplifier (climate change), just because they cannot agree about values but can agree that malaria is bad.
The idea of moral enhancement strikes me as a “deficit theory” (to borrow from the literature of science and technology studies) – it’s an idea that’s based on the problem being that we lack something (information, in some cases, or maybe moral goodness in this situation) and that we can solve lots of problems if we fix that deficit. But this very frequently doesn’t work, especially when what’s really preventing agreement is a suite of other barriers, such as lack of agreement about what our shared future may be.
And that’s where institutional innovation strikes me as preferable to taking drugs to make us clever or more moral, exactly because you cannot avoid actually confronting some of those values disagreements and working out ways around them. But the fact that institution-building is hard – especially where crucial values are deeply contested – shouldn’t mislead us into thinking it’s hopeless. The smallpox campaign needed the institutions (though a little bit more kumbaya wouldn’t have hurt and may have helped), because individuals lack the mandate and the ability to coordinate their way out of complex problems.
*Does anyone else think this might be a synonym for gullible?
I agree with Owen Schaefer and would be reluctant to accept the premise that people are bad. A better premise might be that people are weak. If someone is weak she may desire to do the right thing but be unable to implement her desire. A smoker may desire to give up smoking but be unable to implement her desire; if she really desires to give up, she might use nicotine gum or patches to achieve it. Accepting the above implies that some people who see themselves as morally weak might desire to be better people and would willingly partake in biomedical moral enhancement. The question is how many people would be prepared to do so. Let us accept that, provided someone sees herself as morally weak, she has an incentive to improve herself – an incentive to partake in biomedical moral enhancement, provided such enhancement becomes available and is both safe and effective. The question now becomes how many people would see themselves as morally weak. An alternative means of dealing with existential threats such as global warming is suggested by Thomas Wells, writing in the magazine Aeon. He argues that because future people will have interests that will be affected by our current policies, they should have some effect on our election procedures. I am somewhat sceptical about Wells’ approach but have contrasted these two approaches in wooler.scottus.
In their paper ‘The Perils of Cognitive Enhancement and the Urgent Imperative to Enhance the Moral Character of Humanity’, Persson and Savulescu finish with “If safe moral enhancements are ever developed [they] would be compulsory.” They then drift off to Narnia to get this little thought blessed by C.S. Lewis. (Journal of Applied Philosophy, Vol. 25, Issue 3, p. 177)
It might be possible to get a small cabal of the so-called professionals and experts that they so admire to agree on the theory of moral enhancements, and it might also be possible, in the case of psychotropic drugs for example, to get Big Pharma to claim it had produced a moral enhancement drug to the specifications set by the professionals and experts. In the world behind the wardrobe this elixir might well be safe and made compulsory, but in the real world there could be no agreement beyond the confines of the cabal on the theory or indeed the need for moral enhancement, and there is no possibility of producing a powerful psychotropic drug that does not have side effects. We can be reasonably certain that the majority of the population would not accept that moral enhancement technology could be correctly determined as being ’safe’ (they would of course be correct in this belief), so the whole idea is a complete non-starter.
Moral enhancement may have some worth as a thought experiment to illustrate how quickly technological fixes fall apart. Beyond that, I am at a loss to know why anyone can take this nonsense seriously. I am sure I will be accused of attacking a straw man, but I have looked at this proposal from all directions (I even bought their book) and cannot find any merit in it.
It’s true that a central problem with humans taking charge of the future evolution of human nature is that it entails first becoming sufficiently co-operative and well-intentioned, on a global scale, to be able to successfully administer such a project. Which will presumably depend on significant change having occurred without such assistance. When (and if) the science and technology for such evolution becomes available, we may find that those who support it may need to extricate themselves from the rest of humanity by some means, in order to have any chance of success.
‘When (and if) the science and technology for such evolution becomes available, we may find that those who support it may need to extricate themselves from the rest of humanity by some means, in order to have any chance of success.’
Let me see if I get this right – moral enhancement technology may be for the few who will then extricate themselves from the rest of humanity. If that is the plan, I assume your version of ‘moral enhancement’ is something more akin to becoming Übermensch. This is not the plan Persson and Savulescu have, but it certainly does not surprise me that dreams about futuristic moral enhancement can be viewed as a means to rerun the nightmares of the past.
What I meant was that people of a “morally enhanced” community could well prove a particularly vulnerable target for those not so enhanced. And there would be little chance of creating a better society for all when only the supporters of such enhancement have taken steps in that direction. If it becomes possible to take human nature in various different directions, there would presumably be a wide diversity of programmes underway in different regions, under different regimes etc., some seeking to achieve the opposite of “moral enhancement”. Those choosing the most peaceful and co-operative options may find they have to leave Earth entirely in order to be safe from the rest of the population. All very far-fetched, I admit, but so is the idea that we’ll somehow achieve the degree of global agreement and co-operation needed to run “moral enhancement” on the scale of the entire species.
Yes, it is far-fetched to assume we could achieve moral enhancement on a global scale by means of a technological fix. That is because, as I briefly indicated, there are good reasons why we could never reach a consensus as to what would count as moral enhancement, whether it is needed, and whether it would be ‘safe’ to use. Threatening compulsion, as Persson and Savulescu do, cannot by any stretch of the imagination be described as part of the ‘most peaceful and co-operative options’. I think you need to consider who is issuing the threats before you take the moral high ground. Interesting that your solution to the dangers you perceive is yet another high-tech fix. Would it not be easier to take the traditional escape and hightail it up a mountain?
“Threatening compulsion, as Persson and Savulescu do, cannot by any stretch of the imagination be described as being part of the ‘most peaceful and co-operative options’.”
I haven’t read their book so am not in a position to evaluate their stance (or the technology they discuss – my ideas are based on genetic engineering rather than “pills”), but there may be ways of making a standard set of positive enhancements compulsory without any use of force – for example, offering a wide range of attractive but non-compulsory enhancements, on condition that the recipients (or their prospective parents) also accept the standard set of enhancements. There’ll still be people within such societies rejecting any enhancements, but their numbers would soon dwindle when they find themselves at a disadvantage due to the increasing numbers of people with enhanced intelligence and aptitudes etc. But there would still be the problem of entire nations whose governments refuse to get involved for religious or other ideological reasons, and the threat of particularly aggressive regimes engineering a population of enhanced warriors etc.
“Would it not be easier to take the traditional escape and hightail it up a mountain?”
It would be very hard to maintain a modern progressive civilization “up a mountain”, quite possibly surrounded by hostile forces. On the other hand, colonizing another planet would also be extremely difficult and vastly expensive. It’s very difficult to evaluate the practicality of any of this stuff from the perspective of current technology, but the ideas deserve to be explored in advance to the extent that they meaningfully can be.
Persson and Savulescu include all forms of enhancement. I think we should keep to the meaning of ‘compulsory’ as they use it and not try to put a gloss on it.
This is all quite fun and would make for an amusing topic over few pints of beer, but I really cannot take it seriously because it is too easy to make it up without the slightest reference to reality. I think there is a place for nonsense in our thinking, but there comes a time when we must say enough is enough.
OK, let’s just dismiss the potential of future technology as “nonsense” and we won’t have to worry about it. After all, technological progress hasn’t really changed the way we live or presented us with any new ethical challenges over the last couple of centuries, has it? Why should the future be different?
I am very interested in technology and its future uses, but we must try to understand the history of technology and its present condition, and be realists about the future. Weird speculation about people having to ‘leave Earth entirely in order to be safe from the rest of the population’ is nonsense. I do not think it is a serious speculation; at best it might make a plot for a sci-fi film which, as far as I am concerned, the rest of the population are welcome to watch.