Are We Heading Towards a Post-Responsibility Era? Artificial Intelligence and the Future of Morality
By Maximilian Kiener. First published on the Public Ethics Blog
AI, Today and Tomorrow
77% of our electronic devices already use artificial intelligence (AI). By 2025, the global market for AI is estimated to grow to 60 billion US dollars, and by 2030, AI may boost global GDP by 15.7 trillion US dollars. At some point thereafter, AI may even become the last human invention, provided it optimises itself and takes over research and innovation, leading to what some have termed an ‘intelligence explosion’. In the grand scheme of things, as Google CEO Sundar Pichai believes, AI would then have a greater impact on humanity than electricity and fire did.
Some of these predictions will remain controversial. Yet it is also clear that AI increasingly outperforms humans in areas no machine has ever entered before, including driving cars, diagnosing illnesses, and selecting job applicants. Moreover, AI promises great benefits, such as making transportation safer, optimising health care, and assisting scientific breakthroughs, to mention only a few.
There is, however, a lingering concern. Even the best AI is not perfect, and when things go wrong, for example when an autonomous car hits a pedestrian, when Amazon’s Alexa manipulates a child, or when an algorithm discriminates against certain ethnic groups, we may face a ‘responsibility gap’: a situation in which no one is responsible for the harm caused by AI. Such gaps may arise because current AI systems cannot themselves be morally responsible for what they do, while the humans involved may no longer satisfy key conditions of moral responsibility, such as the following three.