
Oxford Uehiro Prize in Practical Ethics: Should We Take Moral Advice From Our Computers? Written by Mahmoud Ghanem


This essay received an Honourable Mention in the undergraduate category of the Oxford Uehiro Prize in Practical Ethics.

Written by University of Oxford student, Mahmoud Ghanem

The Case For Computer-Assisted Ethics

In the interest of rigour, I will avoid use of the phrase “Artificial Intelligence”, though many of the techniques I will discuss, namely statistical inference and automated theorem proving, underpin most of what is described as “AI” today.

Whether we believe that the goal of moral actions ought to be to form good habits, to maximise some quality in the world, to follow the example of certain role models, or to adhere to some set of rules or guiding principles, a good case can be made for consulting a well-designed computer program in the process of making our moral decisions. After all, carrying out any of the above successfully requires at least:

(1) Access to relevant and accurate data, and

(2) The ability to draw accurate conclusions by analysing such data.

Both are things that computers are very good at.

To make a case otherwise is to claim one of two things: either that humans have access to morally relevant data which is in some way fundamentally inaccessible to computers, or that humans can engage in a kind of moral reasoning which is fundamentally uncomputable. I will address these two points before moving on to a suggestion of what such a computer program might look like. Finally, I will address the idea that consulting computers will make us morally lazy, by showing how a well-designed program ought to, in fact, achieve the opposite.

Criticism of the idea that computers can do (1) is acceptable only if we concede that humans are in tune with some internal morality that cannot be modelled or understood psychologically or scientifically. In reply to this view I will simply say that even in such a case, computers would be helpful to us because of their ability to aggregate large quantities of data about the world and then to perform (2) on it. That is, to provide, as advice for us, conclusions drawn from real-world data about complicated moral situations which are hard for us to reason about on our own, such as environmental policy or law-making, while leaving the final decision about which courses of action are preferable to us. As we shall see later, such an approach is in fact very desirable.

Defending the claim that computers are good at (2) is far easier, but in order to do so I will get slightly technical. The limits of what is and is not computable are well studied and well defined; the most famous such results are those provided by Gödel (1931) and by Turing (1936). A common conceit is to take these limits as “proof” that humans are capable of computing things that computers cannot. This view is, at least in the case of Gödel's and Turing's proofs, rather unlikely to be true, not least because both proofs very probably place the exact same limits on human reasoning as they do on computer reasoning. And even if this conceit were true, it would seem very unlikely that moral reasoning is a problem that is not effectively computable. The idea that the only kinds of reasoning in which computers can outperform humans are those employed in purely mathematical tasks is outdated: modern computer systems exist which can perform probabilistic inferences, give medical diagnoses and compete in game shows.

It seems very plausible, therefore, that there are no theoretical limitations preventing the creation of a computer that can provide us with moral advice. I will further argue that the case for computer-assisted ethics is not just theoretical, but also achievable with current technology. I will mostly draw a parallel here with medical diagnosis. Computers that assist doctors in diagnosing patients do so by having access to a set of preprogrammed axioms about the function of the human body and about how various diseases interact with it. From these axioms, and from data about any given patient, the computer can reason deductively and statistically and present a human-readable report (capable of providing a suggested course of treatment, giving reasons as to why it thinks the patient has a certain disease, and explaining why some potential illnesses were ruled out). It seems, then, that to build a computer that does the same for ethics would be a matter of formalising the axioms of a preferred ethical system, providing the computer with the relevant data and leaving it to run either a SAT solver or some higher-order inference tool (ideally one tailored to ethical inferences).
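To make this a little more concrete, here is a minimal sketch, in Python, of the kind of rule-based inference such a system might run. The facts, rules and predicate names are invented for illustration only; a real system would hand a much larger formalised axiom set to a proper SAT solver or theorem prover, but the principle of deriving conclusions together with human-readable justifications is the same.

```python
# Minimal sketch of rule-based ethical inference (illustrative only).
# The facts, rules and predicate names below are invented stand-ins for a
# formalised ethical system; a real system might pass the same axioms to a
# SAT solver or theorem prover instead of this tiny forward-chaining loop.

facts = {"action_breaks_promise", "promise_was_coerced"}

# Each rule: (premises that must all hold, conclusion, human-readable reason).
rules = [
    ({"action_breaks_promise"}, "prima_facie_wrong",
     "breaking a promise is prima facie wrong"),
    ({"prima_facie_wrong", "promise_was_coerced"}, "permissible",
     "a coerced promise does not bind, so breaking it is permissible"),
]

def infer(facts, rules):
    """Forward-chain until nothing new follows, recording the reasons fired."""
    derived, report = set(facts), []
    changed = True
    while changed:
        changed = False
        for premises, conclusion, reason in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                report.append(f"{conclusion}: {reason}")
                changed = True
    return derived, report

conclusions, report = infer(facts, rules)
print("Derived conclusions:", conclusions - facts)
for line in report:
    print(" -", line)
```

The design point that matters here is that every derived conclusion carries the reason it was derived, which is what allows the final report to be read, questioned and overridden by the human user.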

A person could then consult their mobile phone for advice on the ethical implications of taking certain actions, in the same way that they might ask it for a map to the nearest train station or for a recommendation on where to eat out. In this sense, just as GPS apps can keep us from getting lost, the proposed program could help keep us consistent with our own ethical beliefs.

Care must be taken here to ensure that adopting such a system does not bias us towards ethical frameworks that are easy to formalise or to quantify. Beyond this, the most common concern that follows is that (to take the example of GPS further) by relying on our phones to help us with our moral decisions we will somehow lose our own moral sense of direction. These two questions are essentially empirical in nature, and if a convincing case were made that either posed a significant risk to our ability to reason morally, I would happily concede that the fact that we can build computers that will help us make moral decisions does not mean that we ought to, especially if by doing so we run the risk of becoming less moral overall. Given, however, that humans who seek and follow moral advice from other humans are more likely to be morally engaged themselves, I am optimistic that an app or program designed as I have described above would similarly help make us more moral. I hypothesise that this might happen in precisely the same way that automated medical diagnosis tools can be used to help doctors become better at making medical diagnoses, namely by helping them explore and engage with large datasets which are otherwise too complex for a human to analyse. After all, a common complaint about practical ethics is that there are too many factors for any one person to consider. A similar complaint could be made about weather forecasting, economic modelling and space travel, and yet we seem to be able to do all of the above just fine with the aid of computers.

Simple examples include moral frameworks such as utilitarianism (given some model of what constitutes utility) or any consistent, axiomatic rule-based system. In either case the app could collect the relevant data about a given situation and prompt its user about the best course of action.
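As a rough illustration of the utilitarian case, the sketch below (with invented action names, probabilities and utility figures) shows how an app might rank candidate actions by expected utility once the relevant data has been collected:

```python
# Sketch of the utilitarian case. The actions, probabilities and utility
# figures are invented; the app would collect these from data about the
# situation and then prompt the user with the highest expected-utility option.

candidate_actions = {
    "donate_to_effective_charity": [(0.9, 10.0), (0.1, 0.0)],  # (probability, utility)
    "volunteer_locally":           [(1.0, 4.0)],
    "do_nothing":                  [(1.0, 0.0)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(candidate_actions, key=lambda a: expected_utility(candidate_actions[a]))
for action, outcomes in candidate_actions.items():
    print(f"{action}: expected utility {expected_utility(outcomes):.1f}")
print("Suggested action:", best)
```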

To take a simplified but nontrivial example, consider a person who wishes only to perform actions that help him cultivate the habit of being more compassionate. Relevant data would include: research about how humans form habits, a large dataset of examples of compassionate actions (which, in turn, can be used to train a statistical model which approximates the user's intuitive definition of compassion) and actions taken by compassionate role models in similar situations. The person could then ask their computer a question such as:

“Computer. Should I be spending more time with my child or should I continue working overtime at work to help me buy them a more comfortable life?”

To which the computer could reply (for example):

“Based on your ethical goal of BECOMING MORE COMPASSIONATE, have you considered working overtime 3 nights a week and spending the remaining two with your child? Based on my data, people who spend at least 6 hours of quality time with their children are 80% more likely to perform actions you would describe as COMPASSIONATE, while those who spend significantly more time than that do not seem to see much of an improvement. Those who work overtime every day of the week are only 10% more likely to perform actions you would consider COMPASSIONATE, even when doing so for COMPASSIONATE reasons.”

Note that two important properties have been preserved here. All the metaethically “hard” work of choosing an ethical system has been left to the agent. Furthermore, the computer's advice provides the agent with justifications for each course of action, each of which relates back to the human's own defined goals. If the output of the computer ever contradicts the human's own intuitions, the human would be provided with an example of how and why the moral framework they have chosen does not match up with what they consider to be moral.
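To see how a reply of that shape might be produced, here is a rough sketch. Every figure and function name below is invented to mirror the example; in a real system the dose-response curve would be fitted from the dataset of actions the user (or their role models) had labelled compassionate, and the trade-off against overtime would reflect the user's other stated goals.

```python
# Sketch of how a reply like the one above might be generated. All numbers are
# invented placeholders mirroring the example; a real assistant would fit them
# from data labelled "compassionate" by the user or their role models.

def compassion_gain(hours_with_child_per_week):
    """Invented dose-response curve: strong gains up to ~6 hours, then a plateau."""
    h = hours_with_child_per_week
    if h >= 6:
        return 0.80 + 0.005 * min(h - 6, 6)   # "not much of an improvement" past six
    return 0.80 * h / 6

def recommend(nights=5, hours_per_free_night=3, income_weight=0.03):
    """Trade the (invented) compassion gain against a small per-night income benefit."""
    options = []
    for overtime in range(nights + 1):
        free = nights - overtime
        gain = compassion_gain(free * hours_per_free_night)
        options.append((gain + income_weight * overtime, overtime, free, gain))
    _, overtime, free, gain = max(options)
    return (f"Consider working overtime {overtime} nights a week and spending the "
            f"remaining {free} with your child: estimated +{gain:.0%} likelihood of "
            f"actions you would describe as COMPASSIONATE.")

print(recommend())
```

Under these made-up figures the sketch lands on three nights of overtime and two with the child, echoing the example reply, and the estimated gain can be reported back to the user as the justification.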

I also believe that such a program would have powerful and desirable implications if applied to the field of moral luck and agent regret. Specifically, an interesting case study in moral luck is that of a drunk but lucky driver who drives down a road that is often full of pedestrians and, purely by chance, does not hit anyone, though he almost certainly would have on any other night. We would like to hold him morally blameworthy for taking the unnecessary risk of killing someone. The most common approach to moral luck is to say that the drunk driver is indeed morally blameworthy for taking the risk and that he was lucky in the sense that his moral failing has escaped detection. I would argue that he, and all of us who perform immoral actions that go undetected because of epistemic failings, are being morally lazy in an avoidable sense, one that could be addressed very effectively by the use of a computer system as described above.

Consider what the situation would be if the driver had consulted his computerized moral assistant (which he had set up, among other things, with his strong preference not to kill people) and had been informed that the probability of his killing someone if he were to drive down that road while drunk was over 10%. Or, better still, consider that the computer had told him that if he were to drink more than a certain amount that night, he would be more likely to drive home drunk and run that very same risk.

This applies nicely to the ethics of operating heavy machinery in general. Suppose every driver had access to a good estimate of how likely they were to be involved in a traffic accident every time they decided to drive. In such a case, I would argue that the moral regret for traffic accidents caused by maximally safe drivers, when they do happen, would rightfully be shared amongst all those who decide to drive. This has the very desirable effect of making us all more careful drivers.
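A sketch of the kind of per-trip estimate this would require is below. The baseline rate, impairment multipliers and traffic factor are all invented placeholders; a real assistant would estimate them from accident statistics, the route, the time of night and the driver's own record, and would compare the result against the user's stated preferences.

```python
# Sketch of a per-trip risk estimate (all figures are invented placeholders).
# A real assistant would estimate these from accident statistics, the route,
# the time of night and the driver's own record, then compare the result with
# the user's stated preference not to endanger anyone.

BASE_RISK_PER_TRIP = 0.0002                           # hypothetical sober baseline
IMPAIRMENT_MULTIPLIER = {0: 1, 2: 3, 4: 20, 6: 100}   # drinks -> relative risk

def trip_risk(drinks, pedestrian_factor=1.0):
    """Crude multiplicative model: baseline x impairment x how busy the road is."""
    nearest = max(d for d in IMPAIRMENT_MULTIPLIER if d <= drinks)
    return min(1.0, BASE_RISK_PER_TRIP * IMPAIRMENT_MULTIPLIER[nearest] * pedestrian_factor)

def advise(drinks, pedestrian_factor, threshold=0.10):
    risk = trip_risk(drinks, pedestrian_factor)
    if risk > threshold:
        return f"Warning: an estimated {risk:.0%} chance of seriously harming someone on this trip."
    return f"Estimated risk for this trip: {risk:.2%}."

print(advise(drinks=6, pedestrian_factor=6))   # a road often full of pedestrians
print(advise(drinks=0, pedestrian_factor=6))
```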

It follows, then, that a computerized moral assistant would serve to make us more (rather than less) aware of the ethical implications of our actions, by removing some important epistemic limitations on our ability to act morally, as well as by helping us avoid adopting ethical systems whose implications for our day-to-day lives we have not properly thought through. So long as we believe that ethics is, to some extent, something that can be reasoned about, it seems clear that computer programs that help us reason in complex practical situations should be developed and consulted to help us behave ethically in those situations.


Comments on this post

  1. Great paper.

    A computer operating system is a form of government that regulates the movement of data within the digital device.

    Obviously, the preferred and optimal application would be to build a governmental operating system for nations to address the wickedly Gordian problem of global warming.

    Humans may not survive it. And worse, we seem incapable of deploying the solutions required. A cybernetic decision-maker would help.

    “Oh run now and save us Oh Bee Juan the computer application!” It may already be too late.

    The longer humans delay, does the application command more ruthless reactions?

    1. Hi Richard, I’m Mahmoud, the author of this essay.

      Thanks for your comment 🙂

      Calling an OS a form of government is a good analogy, but I think that the kinds of systems we use to regulate data on our computers are probably not the kind we want running governments, particularly because systems of this kind, having reached a certain size, are remarkably opaque. An important quality for AIs providing advice to humans is that they can outline not only a course of action, but also a human-readable explanation of how they arrived at that course of action.

      If you’re interested in Moral AIs used for government you may enjoy a fun (but very philosophically meaty) Asimov short story called “The Evitable Conflict”.
