Max Kiener

Are We Heading Towards a Post-Responsibility Era? Artificial Intelligence and the Future of Morality

By Maximilian Kiener. First published on the Public Ethics Blog

AI, Today and Tomorrow

77% of our electronic devices already use artificial intelligence (AI). By 2025, the global AI market is estimated to grow to 60 billion US dollars. By 2030, AI may even boost global GDP by 15.7 trillion US dollars. And, at some point thereafter, AI may come to be the last human invention, provided it optimises itself and takes over research and innovation, leading to what some have termed an ‘intelligence explosion’. Google CEO Sundar Pichai has even suggested that AI will have a greater impact on humanity than electricity and fire did.

Some of these latter statements will remain controversial. Yet, it is also clear that AI increasingly outperforms humans in many areas that no machine has ever entered before, including driving cars, diagnosing illnesses, selecting job applicants, and more. Moreover, AI also promises great advantages, such as making transportation safer, optimising health care, and assisting scientific breakthroughs, to mention only a few.

There is, however, a lingering concern. Even the best AI is not perfect, and when things go wrong, e.g. when an autonomous car hits a pedestrian, when Amazon’s Alexa manipulates a child, or when an algorithm discriminates against certain ethnic groups, we may face a ‘responsibility gap’: a situation in which no one is responsible for the harm caused by AI. Responsibility gaps may arise because current AI systems themselves cannot be morally responsible for what they do, and the humans involved may no longer satisfy key conditions of moral responsibility, such as the following three.

Peter Railton’s Uehiro Lectures 2022

Written by Maximilian Kiener

Professor Peter Railton, from the University of Michigan, delivered the 2022 Uehiro Lectures in Practical Ethics. In a series of three consecutive presentations entitled ‘Ethics and Artificial Intelligence’, Railton focused on what has become one of the major areas in contemporary philosophy: the challenge of how to understand, interact with, and regulate AI.

Railton’s primary concern is not the ‘superintelligence’ that could vastly outperform humans and, as some have suggested, threaten human existence as a whole. Rather, Railton focuses on what we are already confronted with today, namely partially intelligent systems that increasingly execute a variety of tasks, from powering autonomous cars to assisting medical diagnostics, algorithmic decision-making, and more.

The ABC of Responsible AI

Written by Maximilian Kiener

Amazon’s Alexa recently told a ten-year-old girl to touch a live plug with a penny, encouraging her to do something that could have caused severe burns or even the loss of an entire limb.[1] Fortunately, the girl’s mother heard Alexa’s suggestion, intervened, and made sure her daughter stayed safe.

But what if the girl had been hurt? Who would have been responsible: Amazon for creating Alexa, the parents for not watching their daughter, or the licensing authorities for allowing Alexa to enter the market?
