
Artificial Intelligence

Blade Runner 2049, Parfit and Identity

Julian Savulescu

 

Contains spoilers for both Blade Runner films. This is a longer version of a shorter, spoiler-free piece, Blade Runner 2049: Identity, Humanity, and Discrimination, published in Pursuit.

Blade Runner 2049, like the original, is about identity, humanity and discrimination.

Identity and Humanity

In both films, bioengineered humans are known as replicants. Blade Runners “retire”, or kill, these replicants when they are a threat to society. In the original, Blade Runner Rick Deckard (Harrison Ford) has all the memories and feelings of a human and believes himself to be human, only to discover at the end that he is a replicant. In the sequel, K (Ryan Gosling) is a replicant but comes to believe, falsely, that he is Deckard’s child. At the close of Blade Runner 2049, we are left watching K die, having realised that his memories were implanted by Deckard’s daughter.

In both films we are left wondering what difference there is between a human and a replicant. In the original, rogue replicant Roy Batty (Rutger Hauer) saves Deckard’s life (even though Deckard was trying to kill him) and delivers the famous “Tears in Rain” speech:

“I’ve seen things you people wouldn’t believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain. Time to die.”

Roy comes across as more human than the humans in the film. Indeed, in a preceding scene, a thorn or spike appears through his hand, reminiscent of Christ, whose own identity as fully human and fully divine has puzzled theologians for two millennia.

Both films challenge what it is to be human. In 2049, K believes the child of Deckard might have a soul because it was born.

Who are we?

The films both raise fundamental questions about personal identity: who are we? What fundamentally defines the existence of a person from one moment to the next? In both films, there is the suggestion that it is not the biological mass, the body, that matters but the mind. In the original, bioengineered Roy seems as human as Deckard, as human as someone could be. In 2049, the idea is extended further still: K’s girlfriend Joi is an AI but seems as real as the other characters, and her death is equally tragic.

Derek Parfit died in January this year. He was the world’s most famous moral philosopher (and his favourite film was another Ridley Scott classic, The Duellists). One of his best-known ideas is that “identity” is not what matters, a view he articulated in his masterpiece, Reasons and Persons (Oxford: Clarendon Press, 1984). According to Parfit, what matters is psychological continuity and connectedness, that is, the unity of our mental states.

Read More »Blade Runner 2049, Parfit and Identity

What the Present Debate About Autonomous Weapons is Getting Wrong

Author: Michael Robillard

Many people are deeply worried about the prospect of autonomous weapons systems (AWS). Many of these worries are merely contingent, having to do with issues like unchecked proliferation or potential state abuse. Several philosophers, however, have advanced a stronger claim, arguing that there is, in principle, something morally wrong with the use of AWS independent of these more pragmatic concerns. Some have argued, explicitly or tacitly, that the use of AWS is inherently morally problematic in virtue of a so-called ‘responsibility gap’ that their use necessarily entails.

We can summarise this thesis as follows:

  1. In order to wage war ethically, we must be able to justly hold someone morally responsible for the harms caused in war.
  2. Neither the programmers of an AWS nor its military implementers could justly be held morally responsible for the battlefield harms caused by AWS.
  3. We could not, as a matter of conceptual possibility, hold an AWS itself morally responsible for its actions, including its actions that cause harms in war.
  4. Hence, a morally problematic ‘gap’ in moral responsibility is created, thereby making it impermissible to wage war through the use of AWS.

This thesis is mistaken, for the simple reason that either the AWS is an agent in the morally relevant sense or it is not.

Read More »What the Present Debate About Autonomous Weapons is Getting Wrong

Using AI to Predict Criminal Offending: What Makes it ‘Accurate’, and What Makes it ‘Ethical’.

Jonathan Pugh

Tom Douglas

 

The Durham Police force plans to use an artificial intelligence system to inform decisions about whether or not to keep a suspect in custody.

Developed using data collected by the force, the Harm Assessment Risk Tool (HART) has already undergone a two-year trial period designed to monitor its accuracy. Over the trial period, predictions of low risk were accurate 98% of the time, whilst predictions of high risk were accurate 88% of the time, according to media reports. Whilst HART has not so far been used to inform custody sergeants’ decisions during this trial period, the police force now plans to take the system live.
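To see what such figures mean in practice, here is a minimal sketch with hypothetical counts (not HART’s actual trial data): the “accuracy” of each kind of prediction is simply the proportion of those predictions that turned out to be correct.

```python
# Hypothetical confusion-matrix counts, for illustration only (not HART's data).
# "Accuracy of low-risk predictions"  = share of low-risk calls where the
#                                       suspect did not go on to re-offend.
# "Accuracy of high-risk predictions" = share of high-risk calls where the
#                                       suspect did go on to re-offend.
low_risk_correct  = 4900   # predicted low risk, did not re-offend
low_risk_wrong    = 100    # predicted low risk, re-offended
high_risk_correct = 880    # predicted high risk, re-offended
high_risk_wrong   = 120    # predicted high risk, did not re-offend

low_risk_accuracy  = low_risk_correct / (low_risk_correct + low_risk_wrong)
high_risk_accuracy = high_risk_correct / (high_risk_correct + high_risk_wrong)

print(f"Low-risk predictions correct:  {low_risk_accuracy:.0%}")   # 98%
print(f"High-risk predictions correct: {high_risk_accuracy:.0%}")  # 88%
```

On this reading the two figures track different mistakes: an inaccurate low-risk call means releasing someone who goes on to offend, whereas an inaccurate high-risk call means detaining someone who would not have.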

Given the high stakes involved in the criminal justice system, and the way in which artificial intelligence is beginning to surpass human decision-making capabilities in a wide array of contexts, it is unsurprising that criminal justice authorities have sought to harness AI. However, the use of algorithmic decision-making in this context also raises ethical issues. In particular, some have been concerned about the potentially discriminatory nature of the algorithms employed by criminal justice authorities.

These issues are not new. In the past, offender risk assessment often relied heavily on psychiatrists’ judgements. However, partly due to concerns about inconsistency and poor accuracy, criminal justice authorities now already use algorithmic risk assessment tools. Based on studies of past offenders, these tools use forensic history, mental health diagnoses, demographic variables and other factors to produce a statistical assessment of re-offending risk.

Beyond concerns about discrimination, algorithmic risk assessment tools raise a wide range of ethical questions, as we have discussed with colleagues in the linked paper. Here we address one that is particularly apposite with respect to HART: how should we balance the conflicting moral values at stake in deciding the kind of accuracy we want such tools to prioritise?
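As a rough illustration of that balance, the sketch below (hypothetical risk scores and outcomes, not HART’s model or data) shows how moving a single risk threshold shifts errors between the two kinds of mistake: flagging as high risk suspects who would not have re-offended, and flagging as low risk suspects who do.

```python
# Hypothetical (score, re-offended) pairs; purely to illustrate the trade-off.
cases = [
    (0.05, 0), (0.10, 0), (0.20, 0), (0.30, 1), (0.40, 0),
    (0.50, 1), (0.60, 0), (0.70, 1), (0.80, 1), (0.95, 1),
]

def error_counts(threshold):
    """Count the two kinds of mistake at a given high-risk threshold."""
    false_high = sum(1 for score, reoffended in cases
                     if score >= threshold and not reoffended)
    false_low = sum(1 for score, reoffended in cases
                    if score < threshold and reoffended)
    return false_high, false_low

for threshold in (0.3, 0.5, 0.7):
    false_high, false_low = error_counts(threshold)
    print(f"threshold {threshold}: "
          f"{false_high} wrongly flagged high risk, "
          f"{false_low} wrongly flagged low risk")
```

Which threshold to prefer is not settled by the statistics: it depends on how heavily one weighs wrongful detention against the harm of releasing someone who re-offends, which is the kind of value choice at issue here.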

Read More »Using AI to Predict Criminal Offending: What Makes it ‘Accurate’, and What Makes it ‘Ethical’.

Hide your face?

A start-up claims it can identify whether a face belongs to a high-IQ person, a good poker player, a terrorist, or a pedophile. Faception uses machine learning to generate classifiers that signal whether a face belongs in a given category or not. Basically, facial appearance is used to predict personality traits, types, or behaviors. The company claims to already have sold technology to a homeland security agency to help identify terrorists. It does not surprise me at all: governments are willing to buy remarkably bad snake-oil. But even if the technology did work, it would be ethically problematic.
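For a sense of the mechanics being described, here is a minimal sketch (synthetic data and scikit-learn, with no connection to Faception’s actual system) of a classifier trained to map face-derived feature vectors to a binary label. The toy example also hints at why such claims deserve scepticism: for a rare category, a classifier can report impressive accuracy while telling you almost nothing.

```python
# Minimal sketch of a face-to-label classifier of the kind described.
# Synthetic vectors stand in for face descriptors; labels are assigned at
# random, so any apparent performance is an artefact of the base rate.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_people, n_features = 2000, 128              # e.g. 128-dim face embeddings
X = rng.normal(size=(n_people, n_features))   # stand-in facial features
y = rng.random(n_people) < 0.01               # a rare label, 1% base rate

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Accuracy" near 99% here reflects nothing but the rarity of the label:
# predicting "no" for everyone would score just as well.
print("accuracy:", clf.score(X_test, y_test))
```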

Read More »Hide your face?

A jobless world—dystopia or utopia?

There is no telling what machines might be able to do in the not-too-distant future. It is humbling to realise how wrong our past predictions of the limits of machine capabilities have been.

We once thought that it would never be possible for a computer to beat a world champion at chess, a game thought to be the quintessential expression of human intelligence. We were proven wrong in 1997, when Deep Blue beat Garry Kasparov. Once we came to terms with the idea that computers might be able to beat us at any intellectual game (including Jeopardy! and, more recently, Go), we thought that surely they would be unable to engage in activities that require common sense and physical coordination in response to disordered conditions, as when we drive. Driverless cars are now a reality, with Google aiming to commercialise them by 2020.

Machines assist doctors in exploring treatment options; they score tests, plant and pick crops, trade stocks, store and retrieve our documents, process information, and play a crucial role in the manufacture of almost every product we buy.

As machines become more capable, there are more incentives to replace human workers with computers and robots. Computers do not ask for a decent wage, they do not need rest or sleep, they do not need health benefits, they do not complain about how their superiors treat them, and they do not steal or laze away.

Read More »A jobless world—dystopia or utopia?

Guest Post: Does Humanity Want Computers Making Moral Decisions?

by Albert Barqué-Duran
Department of Psychology
City University London

A runaway trolley is approaching a fork in the tracks. If the trolley is allowed to run on its current track, a work crew of five will be killed. If the driver steers the trolley down the other branch, a lone worker will be killed. If you were driving this trolley, what would you do? What would a computer or robot driving this trolley do? Autonomous systems are coming whether people like it or not. Will they be ethical? Will they be good? And what do we mean by “good”?
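To make the question concrete, here is a deliberately crude sketch (hypothetical names and numbers) of one rule a machine could follow: steer towards whichever option minimises expected casualties. The point is not that this is how autonomous systems should decide, but that any such rule quietly builds in a contestable moral theory.

```python
# A crude candidate "moral decision" rule for the trolley case:
# pick the option with the fewest expected casualties.
def choose_track(casualties_by_option):
    """Return the option whose choice costs the fewest lives."""
    return min(casualties_by_option, key=casualties_by_option.get)

options = {"stay on current track": 5, "steer down other branch": 1}
print(choose_track(options))   # -> "steer down other branch"
```

Swap in a different rule, say one that forbids actively steering into anyone, and the answer changes, which is part of why asking what “good” means here is not an engineering detail.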

Many agree that artificial moral agents are necessary and inevitable. Others say that the idea of artificial moral agents intensifies their distress about cutting-edge technology. There is something paradoxical in the idea that one could relieve the anxiety created by sophisticated technology with even more sophisticated technology. A tension exists between the fascination with technology and the anxiety it provokes. This anxiety could be explained by (1) all the usual futurist fears about technology on a trajectory beyond human control and (2) worries about what this technology might reveal about human beings themselves. The question is not what technology will be like in the future, but rather what we will be like, and what we are becoming, as we forge increasingly intimate relationships with our machines. What will be the human consequences of attempting to mechanize moral decision-making?

Read More »Guest Post: Does Humanity Want Computers Making Moral Decisions?