
Would you hand over a moral decision to a machine? Why not? Moral outsourcing and Artificial Intelligence.

Artificial Intelligence and Human Decision-making.

Recent developments in artificial intelligence are allowing an increasing number of decisions to be passed from human to machine. Most of these to date are operational decisions – such as algorithms on the financial markets deciding what trades to make and how. However, the range of decisions that can be computerised is increasing, and since many operational decisions have moral consequences, they can be considered to have a moral component.

One area in which this is causing growing concern is military robotics. The degree of autonomy with which uninhabited aerial vehicles and ground robots are capable of functioning is steadily increasing. There is extensive debate over the circumstances in which robotic systems should be able to operate with a human “in the loop” or “on the loop” – and the circumstances in which a robotic system should be able to operate independently. A coalition of international NGOs recently launched a campaign to “stop killer robots”.

While there have been strong arguments raised against robotic systems being able to use lethal force against human combatants autonomously, it is becoming increasingly clear that in many circumstances in the near future the “no human in the loop” robotic system will have advantages over the “in the loop” system. Automated systems already have better perception and faster reflexes than humans in many ways, and are slowed down by the human input. The human “added value” comes from our judgement and decision-making – but these are by no means infallible, and will not always be superior to the machine’s. At June’s Center for a New American Security (CNAS) conference, Rosa Brooks (former Pentagon official, now Georgetown Law professor) put this in a provocative way:
“Our record- we’re horrible at it [making “who should live and who should die” decisions] … it seems to me that it could very easily turn out to be the case that computers are much better than we are doing. And the real ethical question would be can we ethically and lawfully not let the autonomous machines do that when they’re going to do it better than we will.” (1)

For a non-military example, consider the adaptation of IBM’s Jeopardy-winning “Watson” for use in medicine. As evidenced by IBM’s technical release this week, progress in developing these systems continues apace (shameless plug: Selmer Bringsjord, the AI researcher “putting Watson through college”, will speak in Oxford about “Watson 2.0” next month as part of the Philosophy and Theory of AI conference).

Soon we will have systems that enter use as doctors’ aides – able to analyse the world’s medical literature to diagnose a problem and provide recommendations to the doctor. But it seems likely that a time will come when these thorough analyses produce recommendations that are sometimes at odds with the doctor’s own judgement – yet are proven to be more accurate on average. To return to combat, we will have robotic systems that can devise and implement non-intuitive (to humans) strategies that involve using lethal force, but achieve a military objective more efficiently and with less loss of life. Human judgement added to the loop may prove to be an impairment.

Moral Outsourcing

At a recent academic workshop I attended on autonomy in military robotics, a speaker posed a pair of questions to test intuitions on this topic.
“Would you allow another person to make a moral decision on your behalf? If not, why not?” He then asked the same pair of questions substituting “a machine” for “another person”.

Regarding the first pair of questions, we all do this kind of moral outsourcing to a certain extent – allowing our peers, writers, and public figures to influence us. However, I was surprised to find I was unusual in doing this in a deliberate and systematic manner. In the same way that I rely on someone with the right skills and tools to fix my car, I deliberately outsource a wide range of moral questions to people who I know can answer them better than I can. These people tend to be better informed on specific issues than I am, have had more time to think them through, and in some cases are just plain better at making moral assessments. I of course select for people whose world view is roughly similar to mine, and from time to time I do “spot tests” – digging through their reasoning to make sure I agree with it.

We each live at the centre of a spiderweb of moral decisions – some obvious, some subtle. As a consequentialist I don’t believe that “opting out” by taking the default course or ignoring many of them absolves me of responsibility. However, I just don’t have time to research, think about, and make sound morally-informed decisions about my diet, the impact of my actions on the environment, feminism, politics, fair trade, social equality – the list goes on. So I turn to people who can, and who will make as good a decision as I would in ideal circumstances (or a better one) nine times out of ten.

So Why Shouldn’t I Trust The Machine?

So to the second pair of questions:
“Would you allow a machine to make a moral decision on your behalf? If not, why not?”

It’s plausible that in the near future we will have artificial intelligence that, for given, limited situations (for example: making a medical treatment decision, a resource allocation decision, or an “acquire military target” decision), is able to weigh up the facts and make a decision as good as or better than a human can 99.99% of the time – unclouded by bias, with vastly more information available to it.

So why not trust the machine?

Human decision-making is riddled with biases and inconsistencies, and can be heavily affected by factors as trivial as fatigue or how recently we have eaten. For all that, our inconsistencies are relatively predictable and have bounds. Every bias we know about can be taken into account, and corrected for to some extent. And there are limits to how insane an intelligent, balanced person’s “wrong” decision will be – even if my moral “outsourcees” are “less right” than me one time out of ten, there is a limit to how bad their wrong decision can be.

This is not necessarily the case with machines. When a machine is “wrong”, it can be wrong in a far more dramatic way, with more unpredictable outcomes, than a human could.
Simple algorithms should be extremely predictable, but can make bizarre decisions in “unusual” circumstances. Consider the two simple pricing algorithms that got into a price war, pushing the cost of a book about flies to $23 million. Or the 2010 stock market flash crash. It becomes even harder to keep track when evolutionary algorithms and other “learning” methods are used. Using self-modifying heuristics, Douglas Lenat’s Eurisko won the US championship of the Traveller TCS wargame with unorthodox, non-intuitive fleet designs. This fun YouTube video shows a Super Mario-playing greedy algorithm figuring out how to exploit several hitherto-unknown game glitches to win (see 10:47).
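
To make the failure mode concrete, here is a minimal sketch in Python of the kind of feedback loop behind the book-pricing incident. The starting prices are invented, and the two multipliers are only approximations of those reported in later analyses of the event; the point is the structure of the interaction, not the exact numbers.

```python
# Two repricing bots, each following a locally sensible rule, locked in a loop.
# Prices and multipliers are illustrative approximations, not the real data.

price_a = 40.00   # seller A holds stock and wants the lowest listed price
price_b = 45.00   # seller B holds no stock; if it sells, it will buy from A and re-ship

for day in range(1, 61):
    price_a = 0.9983 * price_b    # A undercuts B by a tiny margin
    price_b = 1.2706 * price_a    # B lists above A to cover A's price plus a markup
    if day % 10 == 0:
        print(f"day {day:2d}: A = ${price_a:>14,.2f}   B = ${price_b:>14,.2f}")

# Each round multiplies both prices by roughly 0.9983 * 1.2706, about 1.27, so
# they grow exponentially (past $23 million within about two months of daily
# repricing) until a human finally notices.
```

Neither rule looks dangerous on its own; the absurdity only emerges from their interaction, which is exactly the sort of behaviour that is hard to anticipate by inspecting either algorithm in isolation.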

Why should this concern us? As the decision-making processes become more complicated, and the strategies more non-intuitive, it becomes ever harder to “spot test” whether we agree with them – provided the results turn out well the vast majority of the time. The upshot is that we simply have to “trust” the methods and strategies more and more. It also becomes harder to figure out how, why, and in what circumstances the machine will go wrong – and what the magnitude of the failure will be.

Even if we are outperformed 99.99% of the time, the unpredictability of the 0.01% failures may be a good reason to consider carefully what and how we morally outsource to the machine.
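
A crude way to make that trade-off quantitative, using numbers invented purely for illustration: a system that errs far less often can still be the worse bet if its rare errors can be arbitrarily large.

```python
# Toy expected-harm comparison (all numbers invented for illustration).
# A low error *rate* does not guarantee low expected harm if the error
# *magnitude* is unbounded or unknown.

human_error_rate,   human_error_cost   = 0.10,   1.0       # often wrong, boundedly so
machine_error_rate, machine_error_cost = 0.0001, 50_000.0  # rarely wrong, potentially catastrophic

print("expected harm per decision, human:  ", human_error_rate * human_error_cost)      # 0.1
print("expected harm per decision, machine:", machine_error_rate * machine_error_cost)  # 5.0
```

The real difficulty, of course, is that for the machine we may not even be able to estimate the magnitude term in advance, which is precisely the unpredictability the argument above turns on.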

1. Transcript available here.
For further discussion on Brooks’s talk, see Foreign Policy Magazine articles here and here.

Comments

  1. Daniel Suarez’s (“Daemon”, “Freedom TM”) latest book “Kill Decision” just came out in paperback this week. It addresses this issue in a compelling and frightening narrative.

    1. Sean OHeigeartaigh

      I’ve just spent a few minutes googling Suarez and “Kill decision” (embarrassingly, I’d never heard of him). I think I’ll be ordering Kill Decision and Daemon.

      Thanks Brian!

  2. Technological innovation always runs ahead of stewardship, and the DOD isn’t afraid to use humans as crash test dummies when a new ‘must have’ capability arises. Machines making the decision to take life fails the complexity test, as the author points out – advanced algorithms fail because they’re so advanced no one can anticipate clashes or ‘normal accidents’ to paraphrase Charles Perrow. But also consider that advanced AI systems will use ‘black box’ AI techniques, such as evolutionary algorithms and artificial neural networks. We won’t be able to precisely anticipate how such systems will behave over time. And that’s the kind of certainty the system and operators must master before even discussing taking lives.

    1. Thanks James, fully agree. By the way, if you’re the same James Barrat, your “Final Invention” just arrived in our office – looking forward to reading it!

  3. This is not only a moral issue, but also a political one. Science fiction can indeed be a good starting point.
    See Yannick Rumpala, Artificial intelligences and political organization: an exploration based on the science fiction work of Iain M. Banks, Technology in Society, Volume 34, Issue 1, 2012.

  4. To date AI is nowhere near passing Turing’s very simple little test. For decades there have been systems that can work within a “theory” of ethics or law and do some problem-solving using the thin concepts of that theory. So within this very limited field of expertise they might pass a Turing-type test that was further simplified by, for example, instructing interrogators to keep their questions within the system’s expertise. Simple real-world everyday ethical decision-making involving thick concepts, which most people do with little difficulty, is of course way beyond their capabilities, and they would be unable to pass a Turing test confined to this level, i.e. with interrogators instructed to keep to ethics.

    I am not suggesting that the passing of the Turing test should once more become AI’s primary goal. There are numerous areas of machine intelligence where researchers are successfully producing systems that operate in limited domains without ever having to consider the complexities of human intelligence. But the notion that machine intelligence could in the foreseeable future make reliable ethical decisions in an open domain is just another example of AI hype. As Turing correctly predicted, ‘by the end of the century the use of words and general educated opinion will have altered so much that we will be able to speak of machines thinking without expecting to be contradicted.’ (Computing Machinery and Intelligence, 1950) Machines are nowhere near ‘thinking’, but he was right about how words and educated opinion could be altered to make the myth appear a reality. (His use of ‘thinking’ here is curious, but that cannot be unpicked here.)

    Please, before we waste time speculating about what we should or should not do about machines ‘thinking’ up grand ethical solutions let us have some real evidence of their ’thinking’. To get that we are, as you raise at the end of your piece, left with the problem Turing unsuccessfully tried to tackle over sixty years ago.

  5. ‘Bias’ is of course a complex notion. It includes the rational outcome of past experience, as well as components like prejudice and training.

  6. Interesting article, particularly regarding moral outsourcing. I wonder how far we have seen this already with advances in stand-off munitions. The operator is still responsible for the decision-making, yet being more detached from the action perhaps makes the decision to kill easier.

    We are also seeing this moral outsourcing with the actual outsourcing of military capability, as per Iraq. Apologies for the plug, but I wrote a book on Kindle that explores this (‘Contract for Liberty’). The use of private military firms means less accountability for governments compared to when they employ national armed forces in conflict. The lower oversight by governments over their military capability could encourage more ‘robust’ tactics and increase the chances of human rights infractions.

  7. Interesting. I did something related to that and reported it on my blog – "Will & Moral Responsibility in Machines: Self-Driving Google Car" (links disabled in comments here, so you’ll have to browse back on mgto dot org). Essentially, I wanted to test how people would react to the Google self-driving car facing the trolley problem. Would people think it would make the moral choice? Would they trust it more than they would a human? Is it acceptable for people that a self-driving car would make that decision? Who would they hold accountable for whatever decision the self-driving car makes?

    Didn’t follow up on that, but I think it’s a really interesting direction to go…

  8. Forgive me if I sound naive in my comments, because I don’t know very much about AI, but I had a number of difficulties with some of your assumptions in this piece. I’d like to start here:

    ‘It’s plausible that in the near future we will have artificial intelligence that, for given, limited situations (for example: making a medical treatment decision, a resource allocation decision, or an “acquire military target” decision), is able to weigh up the facts and make a decision as good as or better than a human can 99.99% of the time – unclouded by bias, with vastly more information available to it.’

    I have a number of difficulties with this statement. It seems to me that there is a big difference between having ‘facts’ and being able to make judgements about facts, and this is the difference that separates humans from machines. In making use of judgement, humans have evolved to make productive use of bias; another way of expressing it would be to say that prejudice, being able to pre-judge something, is a way of bringing past experience to bear on a particular novel situation. The English essayist William Hazlitt noted that without prejudice he would not be able to make his way across a room. In other words and in all kinds of ways, we have to depend upon all kinds of taken-for-granted assumptions about the world, otherwise it would be impossible to live our lives. The difficulty, of course, is that we don’t always know and can’t always recognise when our biases and prejudices are helpful and when they are not. Just to push the point a bit further, making decisions is not just about having lots of information available, but about being able to exercise phronetic judgement on the information; phronesis, as Aristotle argued, can never be rule-driven because it is always concerned with the particulars of any case, even if it is drawing on a background of more general, rule-governed knowledge.

    So what would it mean, in your case, to make a ‘better’ decision about resource allocation? Again, this isn’t a matter of quantity of facts. The current government is making all kinds of decisions about resource allocation at the moment, and in doing so is taking away benefits from people with disabilities and people in social housing deemed to have ‘an extra bedroom’. It might make economic sense to spend as little as possible from the public purse, but is this a good decision? How do we decide what ‘better’ means, or rather, who decides? Resource allocation and choosing a military target have political and moral dimensions which are not reducible to rules. Politics involves power and contestation – it is a process which has to involve other people. In other words, the current government is not just making decisions on the ‘facts’ but is making judgements about fairness and what they can get away with.

    You seem to imply there may be little difference between trusting a decision to another person and trusting a decision to a machine. There is of course a big difference. Trust is a negotiated, co-created phenomenon which has a lot to do with previous experience of another human being, and again, with judgement. Trusting my laptop computer to perform well, or trusting an algorithm to play Super Mario well, are trivial examples when compared with trusting a computer to decide whether to target people I consider my enemies in remote regions of Pakistan with drone strikes. There are both facts and moral and political consequences to be considered in the latter case.

  9. Humans have been outsourcing moral decisions to machines for millennia. Religious humans outsource to those biological machines they call priests, bishops, imams, wise men, witch doctors.

    But there is a problem with the understanding of ‘moral decisions’.

    When a non-human machine makes a decision we generally know its limitations – garbage in, garbage out sort of thing. We design those machines and have a general idea of what inputs they can take into account and what outputs they might produce as a result. But still these machines can surprise us. That’s why computer programs often do not do what we expect them to do. But they are not making moral decisions. We might program them to make their outputs, their behaviour, conform to behaviour that we find morally desirable – but we are only outsourcing the analysis and action, not the decision about what is moral. We are still programming them to conform to our morals.

    The benefit of the human machine over electronic ones so far is that human machines can make greater associative leaps. We can keep in our minds some principles we want to hold to, and we can make both rational and heuristic analyses about whether some behaviour is within the bounds of those principles. This works better than the more algorithmic methods of other machines, generally, in that we can see exceptions and weigh them up. But we’re also inconsistent. While a non-human machine might consistently push the fat man off the bridge to stop the trolley, humans will vary in their opinion, across humans and even within one human over time. And humans are more likely to dither and cause a less than optimal outcome, or act rashly and cause a less than optimal outcome. We seem to be prepared to accept these flaws in human moral decision making.

    There’s good reason to, I think. A human has sufficient flexibility to correct errors. We can make the mistake of inventing atom bombs, use them, and then realise how bad they are, and then take steps to prevent proliferation. But a simple algorithmic machine will continue to use them if it is programmed with the same data about when to use them. The garbage in-out dilemma is most reliably compensated for by flexible human minds, so far.

    Sadly, often, new humans will make the same mistakes old humans did, by not learning. Scenarios may differ sufficiently to make it difficult to tell if the new scenario matches the old one. Both these issues are contributing to the debate over action in Syria, post-Iraq. Will our new politicians learn from the old mistakes? Is Syria like Iraq or not?

    Non-human machines will go on making the same mistakes until reprogrammed. But once reprogrammed they can stop making the old mistakes. But as yet they can’t take our leaps of principle into novel scenarios.

    “When a machine is “wrong”, it can be wrong in a far more dramatic way, with more unpredictable outcomes, than a human could.”

    I don’t know how this can be known. The only thing stopping a human machine intent on total destruction is the capacity to carry it out. The same applies to a machine mistakenly programmed to perform total destruction (i.e. going wrong). You seem to presume the dumb sort of machine that elsewhere you suppose we might improve on. Current machines? Yes. But if I simply define a ‘good’ machine as one that is more intelligent than humans and capable of making better moral decisions, then I’ve defined a solution. Too easy? Well, that’s exactly the easy step you take in declaring that machines can make ‘more wrong’ decisions – you define the machine as such.

    One underlying problem is that human affairs, including making moral decisions, are complex. There is no end to the data one can absorb that would make one change a decision to act or not. Every scenario has some subtle differences. We invent trolley problem scenarios endlessly. But this endless possibility of indecision is countered, not always successfully, by heuristics that commit us to simplified analyses and decisions: dogma. We can be much better than non-human machines, and much worse.

    But there is no reason to suppose that non-human machines could not eventually develop the combined capacity to be consistent, to learn from past mistakes, to reprogramme new rules for new situations. There’s also no reason to suppose any particular human would agree with such a machine’s decision any more than we agree with each others. Our biggest fear is that machines, Terminator-like, will decide that we are the problem.

    There’s the distinct possibility of humans and machines coinciding at some point. Enhanced human brains and heuristically capable machines might meet at some optimal point – optimal by what standards is then the question, but that would always be the question.

    The difference between non-human machines and us is currently a technological one. Any asserted claim that there is a significant difference from a moral perspective is usually based on a presupposition about morality being some higher order mode of reality, rather than merely an invention of human culture, probably influenced by biology.

    Hume’s is/ought gap isn’t a rejection of dealing with morality in the ‘is’ domain, but rather a rejection of the ‘ought’ domain as this higher domain. The ‘ought’ domain of morality is well and truly in the ‘is’ domain. We just don’t know how to compute it and instead rely on heuristics, feelings, and sometimes dogma. Moral claims are preference claims. Their deep cultural and biological roots cause us to think them beyond the reach of machines in principle, while there is no reason to think they are anything other than messy, complex problems that are currently out of reach of non-human machines, and not all that satisfactorily decided upon by human machines.

    “Even if we are outperformed 99.99% of the time, the unpredictability of the 0.01% failures may be a good reason to consider carefully what and how we morally outsource to the machine.”

    Based on history you should already be excluding humans from making moral decisions. You’re not seriously saying such simple machines are less predictable than humans? Seriously? Humans have the uncanny ability to be spiteful and contrary. They can flip in an instant for no obvious reason. At least with a machine you could trace its programme and debug it. The limited ability of neuroscience to do that for humans is precisely why we don’t yet know how humans process problems like moral ones.

  10. ‘But there is no reason to suppose that non-human machines could not eventually develop the combined capacity to be consistent, to learn from past mistakes, to reprogramme new rules for new situations.’

    There are reasons why this might not be achievable. Computers are hopeless at this type of task and AI has no workable solutions to the problem. Many AI researchers claim they have a solution and are only a matter of a few years away from a breakthrough, but they have been saying that for over 60 years with nothing to show for it. The problem lies in your assumption that machines can ‘learn from past mistakes’ and can be reprogrammed with new ‘rules’ for new ‘situations’. Philosophers have been arguing for centuries about induction; about what counts as a mistake; about rules and what it is to follow a rule; about the complexities of what makes a ‘situation’; and so on. Indeed, in order to get a machine to this level AI still has to solve other little problems like perception, selfhood, consciousness and, contrary to what Turing thought, problems like ‘what do strawberries taste like?’

    I do not think AI should devote its energies to passing the Turing test. If AI researchers would admit that human intelligence is way beyond their understanding they could get on and produce very intelligent machines. Good technology learns by its failures and develops the strongest lines of research (aeronautics did not remain stuck on the ground trying to crack bird flight). These machines could not pass a Turing type ethics test and could not be relied upon, even if we were so foolish to desire it, to be “ethics problem solvers”. (Here again we encounter a heap of philosophical questions, but enough said). For sure we could go back to pre-enlightenment times and allow others to do our thinking for us (’human machines’, as you put it, being replaced by ’non-human machines’). There are some AI researchers who would welcome this (much of AI is about power and control), but they understand and know so little it is not surprising they believe such nonsense.

  11. Keith Tayler,

    “…reasons why this might not be achievable…”

    The qualifier is fair, but the current state of AI is no definite indicator of its future capability. Nor are past predictions. One might keep making a linear prediction, for example, when in fact the approach to a definite solution is non-linear, so predictions are always off, but there is a general approach to a definite solution.

    But that then presumes there is some ‘definite’ solution required. It may be sufficient to be ‘close enough’ – after all, human brains differ in various degrees of capability such that we judge some more intelligent than others. We don’t have access to the many ancestor intermediates back to when intelligence went beyond that of the common ancestor with apes. We can’t see how intelligence evolved – whether it is a sort of continuous evolution or something that burst into action beyond a certain point. We might find AI evolves slowly, or we may suddenly hit on a breakthrough where we discover the required sort of complexity that starts thinking its own thoughts in complex feedback that becomes self-aware. These are unknowns; but why presume they are ‘cannot be knowns’?

    That then raises the question about our current understanding of human intelligence. Many philosophical, and theological, notions of intelligence, personhood, identity, mind, and so on, are inherently based on notions of Cartesian dualism, as if there is some magic ‘mind’ power that humans have that machines cannot have. Those that deny dualism and also deny we will ever understand human intelligence tend to invent other terms that amount to a sort of dualist phenomenon, like qualia. If humans are indeed biological machines then there is no clear reason why we might not achieve AI. If naysayers think we are not biological machines they need to explain where the mechanistic continuum from physics, to chemistry, to biology, to brains breaks down, or where and how the magic is inserted.

    “If AI researchers would admit that human intelligence is way beyond their understanding…”

    “they understand and know so little it is not surprising they believe such nonsense”

    So, you understand human intelligence enough to be able to say it is beyond human understanding? While also claiming we don’t understand human intelligence? You claim AI researchers understand so little, but you understand enough to deny the possibility of their success? Do you see the self-contradiction?

    1. We have had machine intelligence (AI if you like) around for thousands of years. The opening passages of Adam Smith’s The Wealth of Nations describe the development of an “intelligent machine”. (It was this passage on the division of labour that inspired Gaspard de Prony to create a method for calculating tables with human computers, which in turn gave Babbage the idea for the method in his ‘difference engine’. The rest is history, as they say.) AI today is very advanced and can be said to far outperform human intelligence in many areas. (That said, we should, for example, be extremely wary when making comparisons between machine and human memory, for they are by no means the same.) AI will continue to advance and become even more ubiquitous and controlling. It might well become the dominant intelligence on the planet (possibly in combination with the “artificial intelligentsia”), but it could achieve this without its intelligence being coextensive with human intelligence or it becoming conscious, etc. I agree with Weizenbaum that AI will develop an intelligence ‘alien’ to ours. Some openly welcome this; Weizenbaum and others are less convinced that power and domination should be used as the yardstick of success. I see human intelligence becoming more and more like machine intelligence, perhaps to the point where machine intelligence eliminates what we call thinking (Turing saw no future for thinking). It will not be so alien if we stop thinking about it.

      I am certainly not a Cartesian dualist and do not believe in magic (nor indeed do I believe in the AI myth). I know and understand enough about human intelligence and machine intelligence to be pretty convinced that ‘strong’ AI research (to use Searle’s term) has been a degenerating research programme for the last 60 years and is set to remain so for the foreseeable future. The AI myth would have us believe that machine intelligence will evolve/develop and suddenly become conscious and/or become coextensive with human intelligence. I very much doubt it, but I could of course be wrong. Then again I might also be wrong that god(s) do not exist, but surely that does not mean I have to become agnostic. It might be possible to turn base metals into gold, but that does not mean I am prevented from criticising the aims and methodology of an alchemist. Why is strong AI research sacrosanct? As I have said, ‘weak’ AI powers ahead and could become the dominant form of (alien) intelligence. There is no contradiction in my position. I have been criticised by AI believers for holding it for some 50 years; each new generation repeating the same belief that it is only a matter of a few years before the magic happens. I think I am now nearer being proved wrong about god than AI.

Comments are closed.