An Old Bore writes:
Last week I got the boat from Athens to Hydra. The crossing takes about 2½ hours and runs along the coast of the Argolid.
The sun shone, the dolphins leapt, the retsina flowed, the bouzoukis trembled, and we watched the sun rise over the Peloponnese. It was wonderful. At least it was for me.
Basking on the upper deck, playing Russian roulette with malignant melanoma, were four girls, all aged around 15. They saw nothing. They stretched out on bean bags, their eyes shut throughout the voyage. They heard nothing other than what was being pumped into their ears from their iPods. They would no doubt describe themselves as friends, but they didn’t utter a word to each other. They shared nothing at all apart from their fashion sense and, no doubt, some of the music. The dolphins leapt unremarked upon. We might, so far as the girls were concerned, have been cruising past Manchester rather than Mycenae.
Would you hand over a moral decision to a machine? Why not? Moral outsourcing and Artificial Intelligence.
Artificial Intelligence and Human Decision-making.
Recent developments in artificial intelligence are allowing an increasing number of decisions to be passed from human to machine. Most of these to date are operational decisions – such as algorithms on the financial markets deciding what trades to make and how. However, the range of decisions that can be computerised is increasing, and as many operational decisions have moral consequences, they can be considered to have a moral component.
One area in which this is causing growing concern is military robotics. The degree of autonomy with which uninhabited aerial vehicles and ground robots are capable of functioning is steadily increasing. There is extensive debate over the circumstances in which robotic systems should be able to operate with a human “in the loop” or “on the loop” – and the circumstances in which a robotic system should be able to operate independently. A coalition of international NGOs recently launched a campaign to “stop killer robots”.
While there have been strong arguments raised against robotic systems being able to use lethal force against human combatants autonomously, it is becoming increasingly clear that in many circumstances in the near future the “no human in the loop” robotic system will have advantages over the “in the loop” system. Automated systems already have better perception and faster reflexes than humans in many ways, and are slowed down by the human input. The human “added value” comes from our judgement and decision-making – but these are by no means infallible, and will not always be superior to the machine’s. At June’s Center for a New American Security (CNAS) conference, Rosa Brooks (former Pentagon official, now Georgetown Law professor) put this in a provocative way:
“Our record- we’re horrible at it [making “who should live and who should die” decisions] … it seems to me that it could very easily turn out to be the case that computers are much better than we are doing. And the real ethical question would be can we ethically and lawfully not let the autonomous machines do that when they’re going to do it better than we will.” (1)
For a non-military example, consider the adaptation of IBM’s Jeopardy-winning “Watson” for use in medicine. As evidenced by IBM’s technical release this week, progress in developing these systems continues apace (shameless plug: Selmer Bringsjord, the AI researcher “putting Watson through college”, will speak in Oxford about “Watson 2.0” next month as part of the Philosophy and Theory of AI conference).
Soon we will have systems that will enter use as doctors’ aides – able to analyse the world’s medical literature to diagnose a medical problem and provide recommendations to the doctor. But it seems likely that a time will come when these thorough analyses produce recommendations that are sometimes at odds with the doctor’s – but are proven to be more accurate on average. To return to combat, we will have robotic systems that can devise and implement non-intuitive (to humans) strategies that involve using lethal force, but achieve a military objective more efficiently with less loss of life. Human judgement added to the loop may prove to be an impairment.
At a recent academic workshop I attended on autonomy in military robotics, a speaker posed a pair of questions to test intuitions on this topic.
“Would you allow another person to make a moral decision on your behalf? If not, why not?” He asked the same pair of questions substituting “machine” for “a person”.
Regarding the first pair of questions, we all do this kind of moral outsourcing to a certain extent – allowing our peers, writers, and public figures to influence us. However, I was surprised to find I was unusual in doing this in a deliberate and systematic manner. In the same way that I rely on someone with the right skills and tools to fix my car, I deliberately outsource a wide range of moral questions to people who I know can answer them better than I can. These people tend to be better-informed on specific issues than I am, have had more time to think them through, and in some cases are just plain better at making moral assessments. I of course select for people who have a roughly similar world view to me, and from time to time do “spot tests” – digging through their reasoning to make sure I agree with it.
We each live at the centre of a spiderweb of moral decisions – some obvious, some subtle. As a consequentialist I don’t believe that “opting out” by taking the default course or ignoring many of them absolves me of responsibility. However, I just don’t have time to research, think about, and make sound morally-informed decisions about my diet, the impact of my actions on the environment, feminism, politics, fair trade, social equality – the list goes on. So I turn to people who can, and who will make as good a decision as I would in ideal circumstances (or a better one) nine times out of ten.
So Why Shouldn’t I Trust The Machine?
So to the second pair of questions:
“Would you allow a machine to make a moral decision on your behalf? If not, why not?”
It’s plausible that in the near future we will have artificial intelligence that, for given, limited situations (for example: making a medical treatment decision, a resource allocation decision, or an “acquire military target” decision), is able to weigh up the facts and make a decision as good as or better than a human can 99.99% of the time – unclouded by bias, with vastly more information available to it.
So why not trust the machine?
Human decision-making is riddled with biases and inconsistencies, and can be heavily affected by factors as trivial as fatigue or when we last ate. For all that, our inconsistencies are relatively predictable, and have bounds. Every bias we know about can be taken into account, and corrected for to some extent. And there are limits to how insane an intelligent, balanced person’s “wrong” decision will be – even if my moral “outsourcees” are “less right” than me 1 time out of 10, there’s a limit to how bad their wrong decision will be.
This is not necessarily the case with machines. When a machine is “wrong”, it can be wrong in a far more dramatic way, with more unpredictable outcomes, than a human could.
Simple algorithms should be extremely predictable, but can make bizarre decisions in “unusual” circumstances. Consider the two simple pricing algorithms that got into a pricing war, pushing the price of a book about flies to $23 million. Or the 2010 stock market flash crash. It gets even harder to keep track when evolutionary algorithms and other “learning” methods are used. Using self-modifying heuristics, Douglas Lenat’s Eurisko won the US championship of the Traveller TCS game with unorthodox, non-intuitive fleet designs. This fun YouTube video shows a greedy algorithm playing Super Mario figuring out how to exploit several hitherto-unknown game glitches to win (see 10:47).
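The feedback loop behind the book-pricing incident is easy to reproduce. A minimal sketch in Python – the two multipliers are those reported at the time, but the starting prices and loop structure here are illustrative assumptions:

```python
# Two naive repricing bots, modelled on the reported "Making of a Fly"
# incident: seller A slightly undercuts seller B, while seller B prices
# at a fixed margin above seller A. Neither has a sanity ceiling.

UNDERCUT = 0.9983    # seller A's reported multiplier
MARKUP = 1.270589    # seller B's reported multiplier

price_a, price_b = 17.99, 18.99  # illustrative starting prices

for _ in range(60):               # one repricing pass per "day"
    price_a = price_b * UNDERCUT  # A undercuts B
    price_b = price_a * MARKUP    # B prices above A

# Each round multiplies both prices by 0.9983 * 1.270589 ≈ 1.268, so
# they grow exponentially - into the tens of millions within 60 rounds.
print(f"A: ${price_a:,.2f}  B: ${price_b:,.2f}")
```

Neither rule is irrational on its own; the runaway price emerges only from their interaction – which is exactly why such failures are hard to anticipate by inspecting either algorithm in isolation.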
Why should this concern us? As the decision-making processes become more complicated and the strategies more non-intuitive, it becomes ever harder to “spot test” whether we agree with them – provided the results turn out well the vast majority of the time. The upshot is that we have to simply “trust” the methods and strategies more and more. It also becomes harder to figure out how, why, and in what circumstances the machine will go wrong – and what the magnitude of the failure will be.
Even if we are outperformed 99.99% of the time, the unpredictability of the 0.01% failures may be a good reason to consider carefully what and how we morally outsource to the machine.
The government is currently consulting on whether the maximum sentences for aggravated offences under the Dangerous Dogs Act 1991 should be increased. This offence category covers cases in which someone allows a dog to be dangerously out of control and the dog injures or kills a person or an assistance dog. Respondents to the survey can indicate whether they want tougher penalties for these sorts of cases. The suggested range of penalties for injury to a person – as well as death or injury of a guide dog – are three, five, seven or 10 years in prison. In relation to cases involving the death of a person, the respondent is asked: “Which of the following options most closely resembles the appropriate maximum penalty: seven years, 10 years, 14 years or life imprisonment?”
Given that the current maximum sentence for cases involving death is two years in prison, changing the law to match any of these options would represent a significant increase in the severity of the sanction. Whilst the current two-year maximum has understandably struck many as too low, it is important that those responding to the consultation — and those revising the law it is intended to inform — think carefully about the principles that would justify an increase.
Not all ethical issues are equally important. Many ethicists spend their professional lives performing in sideshows.
However entertaining the sideshow, sideshow performers do not deserve the same recognition or remuneration as those performing on our philosophical Broadways.
What really matters now is not the nuance of our approach to mitochondrial manipulation for glycogen storage diseases, or yet another set of footnotes to footnotes to footnotes in the debate about the naturalistic fallacy. It is: (a) Whether or not we should be allowed to destroy our planet (and if not, how to stop it happening); and (b) Whether or not it is fine to allow 20,000 children in the developing world to die daily of hunger and entirely avoidable disease (and if not, how to stop it happening). My concern in this post is mainly with (a). A habitable planet is a prerequisite for all the rest of our ethical cogitation. If we can’t live here at all, it’s pointless trying to draft the small print of living.
Last week, Canadian researchers published a study showing that some modern slot machines ‘trick’ players – by way of their physiology – into feeling like they are winning when in fact they are losing. The researchers describe the phenomenon of ‘losses disguised as wins’, in which net losses involving some winning lines are experienced in the same way as net wins, due to physiological responses to the accompanying sounds and lights. The obvious worry is that players who are tricked into thinking they’re winning will keep playing longer and be motivated to come back and try again.
The game set-up is as follows: players bet on 15 lines simultaneously, any of which they might win or lose. A player accrues a net profit if the total amount collected from all winning lines is greater than the total amount wagered on all 15 lines. Such an outcome is accompanied by lights and sounds announcing the wins. However, lights and sounds are also played if any of the lines win, even if the net amount collected is less than the total amount wagered on all 15 lines. If a player bets 5 credits per line (5 x 15 = 75) and wins 10 back from each of 3 lines (= 30), then the player has actually lost money, even though the lights and sounds indicate winning. The loss, the researchers claim, is thus disguised as a win.
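The arithmetic of a ‘loss disguised as a win’ can be made explicit in a few lines. A minimal sketch – the function name and labels here are mine, not the researchers’:

```python
def classify_spin(bet_per_line, lines, total_returned):
    """Classify a multi-line slot spin by its net result."""
    total_wagered = bet_per_line * lines
    if total_returned > total_wagered:
        return "net win"
    if total_returned > 0:
        # Some lines paid out, so the machine fires its winning
        # lights and sounds - but the player is down overall.
        return "loss disguised as win"
    return "loss"

# The example from the study: 5 credits on each of 15 lines (75 wagered),
# with 3 winning lines returning 10 credits each (30 collected).
print(classify_spin(5, 15, 30))  # loss disguised as win
```

The point of the classification is that the machine’s feedback tracks `total_returned > 0`, while the player’s actual outcome tracks `total_returned > total_wagered` – and the gap between the two is where the disguise lives.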
Dale and Leilani Neumann are Pentecostal Christians. Their 11-year-old daughter, Kara, fell ill. In fact she had (undiagnosed) diabetes. Her parents refused to obtain medical help. Instead they prayed.
‘Kara’s father testified that death was never on their minds. He testified that he knew Kara was sick but was “never to the alarm of death,” and even after she died, her father thought that Jesus would bring Kara back from the dead, as he did with Lazarus.
The parents and friends testified that the parents took tangible steps to help Kara. The mother tried to feed Kara soup and water with a syringe, but the liquid just dribbled out of Kara’s mouth. The father tried to sit Kara up, but she was unable to hold herself up. At some point, Kara involuntarily urinated on herself while lying unresponsive on the couch, so they carried her upstairs and gave her a quick sponge bath while she lay on the bathroom floor.
At one point, Kara’s maternal grandfather suggested by telephone that they give Kara Pedialyte, a nutritional supplement, in order to maintain the nutrients in her body. The mother responded that giving Kara Pedialyte would be taking away the glory from God. Kara’s mother had told another visiting friend that she believed that Kara was under “spiritual attack.”
Friends Althea and Randall Wormgoor testified that they arrived at the Neumanns’ home on Sunday at approximately 1:30 p.m. The Wormgoors saw that Kara was extremely ill and nonresponsive. Her eyes were partially open but they believed she needed immediate medical attention. Randall Wormgoor pulled Kara’s father aside and told him that if it was his daughter, he would take her to the hospital. The father responded that the idea had crossed his mind, and he had suggested it to his wife, but she believed Kara’s illness was a test of faith for their family and that the Lord would heal Kara….’ 
But the Lord did not. Or at least not physically. Kara died from diabetic ketoacidosis. The evidence was that, with conventional medical care, she would have lived.
Over about 14 months, Harry Kakavas lost $20.5 million in a casino in Melbourne. It could have been worse. He put about $1.5 billion on the table. He sued the casino. It knew or should have known, he said, that he was a pathological gambler. It shouldn’t have continued to take his money. It should have protected him from himself. Nonsense, said the High Court of Australia.
‘Even if, contrary to the findings of the primary judge, the appellant did suffer from a psychological impairment, the issue here is whether, in all the circumstances of the relationship between the appellant and Crown, it was sufficiently evident to Crown that the appellant was so beset by that difficulty that he was unable to make worthwhile decisions in his own interests while gambling at Crown’s casino. On the findings of fact made by the primary judge as to the course of dealings between the parties, the appellant did not show that his gambling losses were the product of the exploitation of a disability, special to the appellant, which was evident to Crown.
Equitable intervention to deprive a party of the benefit of its bargain on the basis that it was procured by unfair exploitation of the weakness of the other party requires proof of a predatory state of mind. Heedlessness of, or indifference to, the best interests of the other party is not sufficient for this purpose. The principle is not engaged by mere inadvertence, or even indifference, to the circumstances of the other party to an arm’s length commercial transaction. Inadvertence, or indifference, falls short of the victimisation or exploitation with which the principle is concerned.’ (paras 160-161 of the judgment).
So it all turned on findings of fact (it wasn’t ‘sufficiently evident’ that his losses were the result of a disability, and if they were, they weren’t the product of a disability ‘special to the appellant.’)
That last criterion is interesting. The court seems to be implying that everyone who puts themselves in the position of losing large amounts of money in a casino is necessarily not quite right in the head. To establish liability you need a degree of vulnerability over and above that possessed by the ordinary punter. By accepting the trial judge’s finding that Kakavas did not suffer from a ‘psychological impairment’, the court was presumably saying: ‘Right: so Kakavas is weak and easily exploited: but that’s true of everyone who walks through the door, buys some chips and sits down at the table. That sort of weakness is within the general bell curve of human flabbiness. But Kakavas wasn’t particularly, dramatically, visibly weak.’
By Charles Foster and Jonathan Herring
Scene 1: An Intensive Care Unit
Like many patients in ICU, X is incapacitous. He also needs a lot of care. Much of that care involves needles. Late at night, tired and harassed, Nurse Y is trying to give X an intravenous injection. As happens very commonly, she sticks herself with the needle.
Nurse Y is worried sick. Perhaps she will catch HIV, hepatitis, or some other serious blood-borne infection? She goes tearfully to the Consultant in charge.
‘Don’t worry’, he says. ‘We’ll start you on the regular post-exposure prophylaxis. But to be even safer, we’ll test some of X’s blood for the common infections. I doubt he’ll be positive, but if he is, we’ll start you straight away on the necessary treatment. We needn’t take any more blood: there are plenty of samples already available.’
A sample of blood is submitted for analysis.
I have just watched someone die. Just one person. But a whole ecosystem has been destroyed. Everyone’s roots wind round everyone else’s. Rip up one person, and everyone else is compromised, whether they know it or not. This is true, too, for everything that is done to anyone. Death just points up, unavoidably, what is always the case.
This is trite. But it finds little place in bioethical or medico-legal talk. There, a human is a discrete bio-economic unit, and there’s a convention that one can speak meaningfully about its elimination without real reference to other units.
In some medico-legal contexts this is perhaps inevitable. There have to be some limits on doctors’ liability. Hence some notion of the doctor-patient relationship is probably inescapable, and the notion requires an artificially atomistic model of a patient.
But ethics can and should do better.