Artificial Intelligence

What the Present Debate About Autonomous Weapons is Getting Wrong

Author: Michael Robillard

Many people are deeply worried about the prospect of autonomous weapons systems (AWS). Many of these worries are merely contingent, having to do with issues like unchecked proliferation or potential state abuse. Several philosophers, however, have advanced a stronger claim, arguing that there is, in principle, something morally wrong with the use of AWS independent of these more pragmatic concerns. Some have argued, explicitly or tacitly, that the use of AWS is inherently morally problematic in virtue of a so-called ‘responsibility gap’ that their use necessarily entails.

We can summarise this thesis as follows:

  1. In order to wage war ethically, we must be able to justly hold someone morally responsible for the harms caused in war.
  2. Neither the programmers of an AWS nor its military implementers could justly be held morally responsible for the battlefield harms caused by AWS.
  3. We could not, as a matter of conceptual possibility, hold an AWS itself morally responsible for its actions, including its actions that cause harms in war.
  4. Hence, a morally problematic ‘gap’ in moral responsibility is created, thereby making it impermissible to wage war through the use of AWS.

This thesis is mistaken, for the simple reason that, at the end of the day, either the AWS is an agent in the morally relevant sense or it isn’t.

Continue reading

Using AI to Predict Criminal Offending: What Makes it ‘Accurate’, and What Makes it ‘Ethical’.

Jonathan Pugh

Tom Douglas

The Durham Police force plans to use an artificial intelligence system to inform decisions about whether or not to keep a suspect in custody.

Developed using data collected by the force, the Harm Assessment Risk Tool (HART) has already undergone a two-year trial period to monitor its accuracy. Over the trial period, predictions of low risk were accurate 98% of the time, whilst predictions of high risk were accurate 88% of the time, according to media reports. Whilst HART has not so far been used to inform custody sergeants’ decisions, the police force now plans to take the system live.

Given the high stakes involved in the criminal justice system, and the way in which artificial intelligence is beginning to surpass human decision-making capabilities in a wide array of contexts, it is unsurprising that criminal justice authorities have sought to harness AI. However, the use of algorithmic decision-making in this context also raises ethical issues. In particular, some have been concerned about the potentially discriminatory nature of the algorithms employed by criminal justice authorities.

These issues are not new. In the past, offender risk assessment often relied heavily on psychiatrists’ judgements. However, partly due to concerns about inconsistency and poor accuracy, criminal justice authorities already use algorithmic risk assessment tools. Based on studies of past offenders, these tools use forensic history, mental health diagnoses, demographic variables and other factors to produce a statistical assessment of re-offending risk.
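To make the kind of tool described above a little more concrete, here is a minimal sketch in Python of how such a statistical assessment might be produced: a logistic model that turns a few case features into a re-offending probability and then into a risk band. The feature names, weights, and cut-offs are invented for illustration; they are not drawn from HART or any real instrument.

```python
# Purely illustrative sketch of an actuarial risk score: a logistic model that
# combines a handful of features into a probability of re-offending.
# The feature names, weights, and cut-offs are hypothetical; they are NOT
# taken from HART or any real tool.
import math

# Hypothetical weights of the kind a model might learn from past offender data.
WEIGHTS = {
    "prior_convictions": 0.45,
    "age_at_first_offence": -0.05,
    "violent_history": 0.80,
    "current_offence_severity": 0.30,
}
INTERCEPT = -2.0


def risk_score(features: dict) -> float:
    """Return an estimated probability of re-offending under the toy model."""
    z = INTERCEPT + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))


def risk_band(p: float, low_cutoff: float = 0.3, high_cutoff: float = 0.7) -> str:
    """Map a probability onto the low/moderate/high bands such tools report."""
    if p < low_cutoff:
        return "low"
    if p < high_cutoff:
        return "moderate"
    return "high"


if __name__ == "__main__":
    suspect = {
        "prior_convictions": 3,
        "age_at_first_offence": 17,
        "violent_history": 1,
        "current_offence_severity": 2,
    }
    p = risk_score(suspect)
    print(f"estimated re-offending probability: {p:.2f} -> {risk_band(p)} risk")
```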

Beyond concerns about discrimination, algorithmic risk assessment tools raise a wide range of ethical questions, as we have discussed with colleagues in the linked paper. Here we address one that is particularly apposite with respect to HART: how should we balance the conflicting moral values at stake in deciding the kind of accuracy we want such tools to prioritise?
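One way to see what is at stake in that question: ‘accuracy’ is not a single number. It can be reported separately for low-risk and high-risk calls, and moving the decision threshold typically improves one at the expense of the other, trading wrongly detaining people flagged as high risk against wrongly releasing people flagged as low risk. The sketch below uses invented confusion-matrix counts, arranged only so that the first scenario echoes per-class accuracies like those reported for HART; nothing here reflects HART’s actual data.

```python
# Illustrative sketch: the same tool can be very accurate on "low risk" calls
# and less accurate on "high risk" calls (or vice versa), depending on how the
# decision threshold is set. All counts below are invented for illustration.

def per_class_accuracy(true_neg: int, false_neg: int, true_pos: int, false_pos: int):
    """Accuracy of 'low risk' calls and of 'high risk' calls, reported separately."""
    low_risk_acc = true_neg / (true_neg + false_neg)    # low-risk labels that were right
    high_risk_acc = true_pos / (true_pos + false_pos)   # high-risk labels that were right
    return low_risk_acc, high_risk_acc


if __name__ == "__main__":
    # Cautious threshold: "low risk" is used sparingly, so those calls are almost
    # always right, at the cost of more people wrongly labelled "high risk".
    cautious = dict(true_neg=490, false_neg=10, true_pos=440, false_pos=60)
    # Permissive threshold: fewer people wrongly labelled "high risk", but more
    # genuine re-offenders are waved through as "low risk".
    permissive = dict(true_neg=470, false_neg=30, true_pos=480, false_pos=20)

    for name, counts in (("cautious", cautious), ("permissive", permissive)):
        low_acc, high_acc = per_class_accuracy(**counts)
        print(f"{name:10s} low-risk calls correct: {low_acc:.0%}  "
              f"high-risk calls correct: {high_acc:.0%}")
```

Neither threshold is more ‘accurate’ overall in any neutral sense; choosing between them is choosing which kind of error to tolerate, and that is a moral question rather than a purely statistical one.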

Continue reading

Hide your face?

A start-up claims it can identify whether a face belongs to a high-IQ person, a good poker player, a terrorist, or a pedophile. Faception uses machine learning to generate classifiers that signal whether a face belongs to a given category or not. Basically, facial appearance is used to predict personality traits, types, or behaviors. The company claims to have already sold technology to a homeland security agency to help identify terrorists. It does not surprise me at all: governments are willing to buy remarkably bad snake oil. But even if the technology did work, it would be ethically problematic.

Continue reading

A jobless world—dystopia or utopia?

There is no telling what machines might be able to do in the not-too-distant future. It is humbling to realise how wrong we have been in the past when predicting the limits of machine capabilities.

We once thought that it would never be possible for a computer to beat a world champion at chess, a game thought to be the quintessential expression of human intelligence. We were proven wrong in 1997, when Deep Blue beat Garry Kasparov. Once we came to terms with the idea that computers might be able to beat us at any intellectual game (including Jeopardy! and, more recently, Go), we thought that surely they would be unable to engage in activities that require common sense and physical coordination in responding to disordered conditions, as when we drive. Driverless cars are now a reality, with Google trying to commercialise them by 2020.

Machines assist doctors in exploring treatment options; they score tests, plant and pick crops, trade stocks, store and retrieve our documents, process information, and play a crucial role in the manufacture of almost every product we buy.

As machines become more capable, there are more incentives to replace human workers with computers and robots. Computers do not ask for a decent wage, they do not need rest or sleep, they do not need health benefits, they do not complain about how their superiors treat them, and they do not steal or laze about.

Continue reading

Guest Post: Does Humanity Want Computers Making Moral Decisions?

Albert Barqué-Duran
Department of Psychology
City University London

A runaway trolley is approaching a fork in the tracks. If the trolley is allowed to run on its current track, a work crew of five will be killed. If the driver steers the trolley down the other branch, a lone worker will be killed. If you were driving this trolley, what would you do? What would a computer or robot driving this trolley do? Autonomous systems are coming, whether people like it or not. Will they be ethical? Will they be good? And what do we mean by “good”?

Many agree that artificial moral agents are necessary and inevitable. Others say that the idea of artificial moral agents intensifies their distress with cutting-edge technology. There is something paradoxical in the idea that one could relieve the anxiety created by sophisticated technology with even more sophisticated technology. A tension exists between the fascination with technology and the anxiety it provokes. This anxiety could be explained by (1) all the usual futurist fears about technology on a trajectory beyond human control, and (2) worries about what this technology might reveal about human beings themselves. The question is not what technology will be like in the future, but rather what we will be like, and what we are becoming, as we forge increasingly intimate relationships with our machines. What will be the human consequences of attempting to mechanize moral decision-making?

Continue reading
