Would you hand over a moral decision to a machine? Why not? Moral outsourcing and Artificial Intelligence.
Artificial Intelligence and Human Decision-making.
Recent developments in artificial intelligence are allowing an increasing number of decisions to be passed from human to machine. Most of these to date are operational decisions – such as algorithms on the financial markets deciding what trades to make, and how. However, the range of such decisions that can be computerised is increasing, and as many operational decisions have moral consequences, they can be considered to have a moral component.
One area in which this is causing growing concern is military robotics. The degree of autonomy with which uninhabited aerial vehicles and ground robots are capable of functioning is steadily increasing. There is extensive debate over the circumstances in which robotic systems should be able to operate with a human “in the loop” or “on the loop” – and the circumstances in which a robotic system should be able to operate independently. A coalition of international NGOs recently launched a campaign to “stop killer robots”.
While there have been strong arguments raised against robotic systems being able to use lethal force against human combatants autonomously, it is becoming increasingly clear that in many near-future circumstances the “no human in the loop” robotic system will have advantages over the “human in the loop” system. Automated systems already have better perception and faster reflexes than humans in many respects, and are slowed down by human input. The human “added value” comes from our judgement and decision-making – but these are by no means infallible, and will not always be superior to the machine’s. At June’s Center for a New American Security (CNAS) conference, Rosa Brooks (former Pentagon official, now Georgetown Law professor) put this provocatively:
“Our record- we’re horrible at it [making “who should live and who should die” decisions] … it seems to me that it could very easily turn out to be the case that computers are much better than we are doing. And the real ethical question would be can we ethically and lawfully not let the autonomous machines do that when they’re going to do it better than we will.” (1)
For a non-military example, consider the adaptation of IBM’s Jeopardy-winning “Watson” for use in medicine. As evidenced by IBM’s technical release this week, progress in developing these systems continues apace (shameless plug: Selmer Bringsjord, the AI researcher “putting Watson through college”, will speak in Oxford about “Watson 2.0” next month as part of the Philosophy and Theory of AI conference).
Soon we will have systems that enter use as doctors’ aides – able to analyse the world’s medical literature to diagnose a medical problem and provide recommendations to the doctor. But it seems likely that a time will come when these thorough analyses produce recommendations that are sometimes at odds with the doctor’s own – and are proven to be more accurate on average. To return to combat: we will have robotic systems that can devise and implement non-intuitive (to humans) strategies that involve using lethal force, yet achieve a military objective more efficiently and with less loss of life. Human judgement added to the loop may prove to be an impairment.
Moral Outsourcing
At a recent academic workshop I attended on autonomy in military robotics, a speaker posed a pair of questions to test intuitions on this topic.
“Would you allow another person to make a moral decision on your behalf? If not, why not?” He then asked the same pair of questions, substituting “a machine” for “another person”.
Regarding the first pair of questions, we all do this kind of moral outsourcing to a certain extent – allowing our peers, writers, and public figures to influence us. However, I was surprised to find I was unusual in doing this in a deliberate and systematic manner. In the same way that I rely on someone with the right skills and tools to fix my car, I deliberately outsource a wide range of moral questions to people who I know can answer them better than I can. These people tend to be better-informed on specific issues than I am, have had more time to think them through, and in some cases are just plain better at making moral assessments. I of course select for people who have a roughly similar world view to mine, and from time to time do “spot tests” – digging through their reasoning to make sure I agree with it.
We each live at the centre of a spiderweb of moral decisions – some obvious, some subtle. As a consequentialist I don’t believe that “opting out” by taking the default course or ignoring many of them absolves me of responsibility. However, I just don’t have time to research, think about, and make sound morally-informed decisions about my diet, the impact of my actions on the environment, feminism, politics, fair trade, social equality – the list goes on. So I turn to people who can, and who will make as good a decision as I would in ideal circumstances (or a better one) nine times out of ten.
So Why Shouldn’t I Trust The Machine?
So to the second pair of questions:
“Would you allow a machine to make a moral decision on your behalf? If not, why not?”
It’s plausible that in the near future we will have artificial intelligence that, for given, limited situations (for example: making a medical treatment decision, a resource allocation decision, or an “acquire military target” decision), is able to weigh up the facts and make as good a decision as a human can, or better, 99.99% of the time – unclouded by bias, and with vastly more information available to it.
So why not trust the machine?
Human decision-making is riddled with biases and inconsistencies, and can be heavily affected by factors as trivial as fatigue or when we last ate. For all that, our inconsistencies are relatively predictable, and have bounds. Every bias we know about can be taken into account and, to some extent, corrected for. And there are limits to how bad an intelligent, balanced person’s “wrong” decision will be: even if my moral “outsourcees” are “less right” than me one time out of ten, their wrong decisions can only be so wrong.
This is not necessarily the case with machines. When a machine is “wrong”, it can be wrong in a far more dramatic way, with more unpredictable outcomes, than a human could.
Simple algorithms should be extremely predictable, yet they can make bizarre decisions in “unusual” circumstances. Consider the two simple pricing algorithms that got into a pricing war and pushed the price of a book about flies to $23 million. Or the 2010 stock market “flash crash”. Behaviour becomes even harder to track when evolutionary algorithms and other “learning” methods are used: using self-modifying heuristics, Douglas Lenat’s Eurisko won the US championship of the Traveller TCS game with unorthodox, non-intuitive fleet designs. And this fun YouTube video shows a greedy algorithm playing Super Mario that figures out how to exploit several hitherto-unknown game glitches to win (see 10:47).
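To see how easily two individually sensible rules can interact pathologically, here is a minimal sketch of that book-pricing incident in Python. The two multipliers are roughly those reported in write-ups of the incident; the starting prices and the number of repricing rounds are my own assumptions:

```python
# Two sellers, each running a simple repricing rule against the other.
# The multipliers are roughly those reported at the time; the starting
# prices and the round count are assumptions for illustration.

price_a, price_b = 18.00, 19.00   # assumed starting prices, in dollars

for _ in range(60):               # one repricing each per day, ~two months
    price_a = 0.9983 * price_b    # seller A: slightly undercut seller B
    price_b = 1.2706 * price_a    # seller B: price above A, keep the margin

print(f"A: ${price_a:,.2f}   B: ${price_b:,.2f}")
# Each round multiplies both prices by ~0.9983 * 1.2706 ≈ 1.268, so they
# grow exponentially – past $20 million within about 60 rounds.
```

Neither rule is crazy on its own; the failure lives entirely in the interaction.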
Why should this concern us? As the decision-making processes become more complicated, and the strategies more non-intuitive, it becomes ever harder to “spot test” whether we agree with them – especially when the results turn out well the vast majority of the time. The upshot is that we simply have to “trust” the methods and strategies more and more. It also becomes harder to figure out how, why, and in what circumstances the machine will go wrong – and what the magnitude of the failure will be.
Even if we are outperformed 99.99% of the time, the unpredictability of the 0.01% failures may be a good reason to consider carefully what and how we morally outsource to the machine.
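To make that trade-off concrete, here is a toy expected-loss comparison, with every number invented purely for illustration:

```python
# Toy numbers, invented purely for illustration.
human_error_rate, human_worst_case = 0.10, 10   # often wrong, but boundedly so
machine_error_rate = 0.0001                     # very rarely wrong, but...

# ...if a machine failure can be arbitrarily costly, the expected-loss
# comparison depends entirely on the failure magnitude M:
for M in (100, 10_000, 1_000_000):
    human_loss = human_error_rate * human_worst_case    # at most 1.0
    machine_loss = machine_error_rate * M
    print(f"M = {M:>9,}:  human <= {human_loss},  machine = {machine_loss}")
```

The 99.99% success rate settles nothing by itself: until we can bound the magnitude of a machine failure, the human’s bounded badness may still be the safer bet.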
1. Transcript available here.
For further discussion on Brooks’s talk, see Foreign Policy Magazine articles here and here.
Sponsoring student adventures, with added charity.
Last weekend’s work was greatly enlivened for me by keeping track of the updates for Trinity Jailbreak. The challenge: 37 student teams had 36 hours to get as far from Trinity College Dublin as possible, without spending any money. They were, however, allowed to blag, persuade, and get corporate sponsorship to aid their “getaways”. Pre-contest, Donegal, Kerry, or perhaps Calais at a stretch seemed like likely winning destinations. However, this grossly underestimated the resourcefulness of these students – with one hour to go, the two leading teams’ distances from Dublin differed by less than 10 km as they frantically ran in opposite directions, one team in Indonesia, the other in Argentina, both more than 11,000 km from home.
From Trinity College Dublin’s website: “Medical students, Claire and Matthew were named winners of the competition on Monday when they reached the sunny Atlantic coastal city of Miramar, south of Buenos Aires. They managed to fly to Buenos Aires and take a taxi down the Argentine coast without spending any of their own money and without speaking Spanish. Musician Chris de Burgh stepped in to pay for their ticket home… Many of the students persuaded travel agencies to sponsor them and made it to Paris, the Vatican City and Warsaw during the event… Lydia Rahill of the Trinity Law Society expressed gratitude to all who supported the event through sponsorship and offers of food, accommodation and help with travel expenses.”
The challenge was organised on behalf of the charity St. Vincent de Paul, which fights poverty in Ireland, as well as Amnesty International, and had an original fundraising goal of €4,000 (~£3,450). However, it caught the public imagination: €10,000 had been raised by Monday, with €15,000 expected as a final tally. It has apparently even made it into Time Magazine.
“Very impressed with @TCDJailbreak. Brilliant way to raise money for charity and amazing to see how far you can get by just blagging” tweets one Irish celebrity.
A great success!
Or was it?
I’d welcome anyone’s best guess at how much all of these flights, accommodation, food and so on for the 37 teams added up to. It would take an age to find out exactly, but it seems to have cost at least one team’s sponsors well over €4,000 (from Twitter: “@RoyalBruneiAir and Dermot Mannion are unbelievably generous, sponsored a @TCDJailbreak team w/ return flights to the tune of over €2000 each”). Now, only a few teams got as far as Indonesia, Brunei, Sydney (the Brunei and Sydney teams missed the deadline) and Argentina – most stalled in Europe, and some didn’t make it past Ireland. But it’s a near certainty that the overall cost of sponsoring this event is more than the expected €15,000 raised – I suspect a lot, lot more. A rough calculation is sketched below.
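For what it’s worth, here is my own back-of-envelope in Python – every figure except the tweeted €2,000 fares is a guess on my part:

```python
# Back-of-envelope sponsorship cost. Only the €2,000 return fares are
# sourced (from the tweet above); every other figure is my own guess.
long_haul_teams = 4                  # Indonesia, Brunei, Sydney, Argentina
long_haul_cost = 2 * 2000            # two return fares "over €2000 each"
europe_teams, europe_cost = 20, 300  # guessed count and per-team cost
raised = 15_000                      # expected final fundraising tally

spent = long_haul_teams * long_haul_cost + europe_teams * europe_cost
print(f"rough sponsorship cost: €{spent:,} vs €{raised:,} raised")
# => rough sponsorship cost: €22,000 vs €15,000 raised
```

Even with these deliberately modest guesses, the sponsorship costs outstrip the money raised; more realistic assumptions only widen the gap.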
Which means the bottom line is: €15,000 raised for charity, >>€15,000 spent to sponsor “adventures for students”.
There is a lot to like about this challenge – it encouraged a great deal of resourcefulness in those taking part, and showed quite how far you can get from a starting position of very little. But as a charity event? Can we say “the bottom line is that €15,000 is going to good causes that it wouldn’t have reached otherwise”? Or is this an example of a hugely wasteful, resource-gobbling “charity challenge”, with only a small fraction of the funds making it to the intended charities?
– Perhaps this just counts as “good advertising” for the airlines and travel agencies that sponsored it, and this money would otherwise have gone into more television advertising.
– Perhaps the private individuals who covered flights, food and accommodation just like sending students on holiday, and wouldn’t have dreamed of sending this money directly to a charitable organisation if this event hadn’t taken place.
However, the public landscape is littered with good causes that need our support, and when we – or celebrities, or corporations – support one cause, those resources cannot be allocated to another. For individuals, some literature suggests that biases such as the “purchase of moral satisfaction” (Kahneman and Knetsch 1992) mean people will spend just enough on a “good cause” to get that warm, fuzzy feeling, more or less regardless of how much actual good the donation does in the world; it would seem to follow that if a better target comes along they will not be inclined to support it. So a wasteful charity stunt may hurt other charitable ventures by diverting resources in inefficient directions*.
A contrasting charitable challenge taking place shortly is Live Below The Line. Participants challenge themselves to live on £1 or less a day for five days, raising money while highlighting just how little those at the extreme poverty line have to live on. And it costs very little to put on – if anything, those undertaking the challenge save money.
It’s hard for me to say with certainty, without a lot more background work, whether the ultimate charitable contribution of “Jailbreak” has a plus or a minus sign attached – a growing number of organisations, such as Giving What We Can, that focus on exactly these considerations could give a much more thorough and careful analysis. However, events of this type are unlikely to represent the best way of doing the most good.
If you want to climb Kilimanjaro, climb Kilimanjaro. If you want to raise money for charity, find a challenge that doesn’t cost large amounts of money and get sponsored for that. Then give the charity your Kilimanjaro travel money.
Responsible charities need to move away from association with costly, resource-gobbling stunts.
(Although in fairness, Argentina? That’s some pretty good blagging.)
*It has also been pointed out that this event had a dinosaur-sized carbon footprint that is perhaps worth factoring in, but further discussion on issues such as these is outside the scope of this blog post.
D. Kahneman and J. L. Knetsch, “Valuing public goods: The purchase of moral satisfaction”, Journal of Environmental Economics and Management 22, 57–70 (1992).