
The ABC of Responsible AI

Written by Maximilian Kiener

 

Amazon’s Alexa recently told a ten-year-old girl to touch a penny to a live plug, encouraging her to do something that could have caused severe burns or even cost her a limb.[1] Fortunately, the girl’s mother heard Alexa’s suggestion, intervened, and made sure her daughter stayed safe.

But what if the girl had been hurt? Who would have been responsible: Amazon for creating Alexa, the parents for not watching their daughter, or the licensing authorities for allowing Alexa to enter the market?

The Question of Responsibility

Such questions of responsibility are very difficult to answer. When we use new technologies, such as AI, we may know in principle that Alexa can come up with bad ideas, that autonomous cars can hit pedestrians, and that medical AI can misdiagnose patients. But we cannot know exactly when these things will happen, why they happen, or how to guard against them. Often, we’re simply unable to avert harm. On what grounds, then, should we be held responsible?

 

Responsibility and Sustainability

Unfortunately, we cannot just set aside the questions of responsibility. We need clarity; otherwise we will fail to use AI sustainably. The general idea behind sustainability is this: it is about how we can satisfy the needs that people have today without frustrating the needs that will exist tomorrow. Often, the environment comes to mind: environmental sustainability asks how we can use natural resources so that our planet remains a habitable place for future generations as well.

But sustainability has a social dimension too: social sustainability asks how we can create societal structures that not only satisfy our interests today but also help people to live a good life, respect each other, and enjoy peace tomorrow. The greatest threats to social sustainability are polarisation, discrimination, inequality, and injustice. And by now, we know that some of the latest technologies tend to create, or at least exacerbate, these threats.

This is where responsibility comes in. The core idea of responsibility lies in its very name: response-ability is about giving responses or answers, and it concerns the conditions under which people are obliged to do so. A responsible person is one who can be asked to answer for, explain, or justify their conduct to others. If you’re responsible in this sense, it may still be inappropriate to blame or punish you, at least when you have a good explanation. But in any case, you owe others that explanation, and potentially an apology too.

This type of responsibility is like a conversation. It brings together those who caused harm and those who suffered it. What is more, it requires that people commit to certain moral values: to respect each other as moral equals in that conversation, to recognise one another’s standpoints and interests, and perhaps even to make amends in the end.

But when responsibility brings people into conversation like that, it also protects against the threats to social sustainability. Responsibility demands equality and thus opposes discrimination. Responsibility demands mutual respect and thus opposes injustice. And responsibility demands conversation and thus opposes polarisation.

This is not to say that responsibility, as answerability, is a magic cure for the threats to social sustainability. There is more we need to do, for sure. But it shows that responsibility can play an important role in ensuring the socially sustainable use of new technologies. And for this reason, we should not take the topic lightly.

 

The ABC of Responsible Use of Technology

But what can we actually do about it? To make a start, I suggest that we publicly discuss and determine three aspects of responsibility in the use of new technologies:

Answerability: ahead of time, we need to determine who is under an obligation to explain what and to whom. For instance, what exactly is it about products like Alexa that Amazon ought to be able to explain to the victims of harm or society at large?

Blame: we need to determine what kind of answer we expect from people, and under what conditions those answerable should face blame or legal punishment. What are the standards that developers and users of new technologies ought to live up to, and who bears the burden of proof?

Compensation: finally, we need to clarify who pays when things go wrong, since new technologies not only generate great benefits but also impose significant burdens and harms.

 

[1] https://www.bbc.com/news/technology-59810383

 


2 Comments on this post

  1. There has been much discussion of responsibility lately. Creators of tools and platforms accept none, all but saying that if users of their products and services cannot use them wisely, too bad. They have a point: no one is forcing anyone to participate in social media. Yet the parable of the child and the penny deserves scrutiny, as does the growing issue of children bringing guns to school. Someone needs to bear responsibility for that. Probably not the firearms manufacturers: they don’t force people to buy their products. That said, there must be some responsibility somewhere. I can’t place the blame on any single influence, but it seems that social change began to accelerate with postmodernism. I also sense a loss of attention to detail attending the fast-paced world we have been experiencing for a score of years. Job ads are telling signals, with their descriptions of ‘a fast-paced, high-energy workplace’. How fast-paced and high-energy does a workplace have to be? Responsibility is not contingent upon willingness to accept it. It is a requirement, borne by rational human beings.

  2. Responsibility, and any accounting of it, inevitably leads to regulation and legislation, whether before any harm occurs or after harms become sufficiently great in number or in immediate damage. Yet history shows repeated failures when it comes to responsibility for the future. Consider the current global problems with pollution, where the original complainants about a new pollution risk tied to new commerce or technology were alienated from many communities by the actions of those responsible, who were later not held accountable. What currently passes for an acceptable level of responsibility may be exemplified by the bedlam at Number 10 and partygate, where the leading figure of the land provides a prime example of the difficulty of defining the word ‘responsibility’, and of attempts to weaken accountability by procrastination and by leveraging other concerns to redefine the rules.
    So what additional damage is caused when the accountability of computerized systems merely reflects, or is reflected by, those human social situations in which the immediacy of the moment is given precedence over future difficulties? Progress may be slowed if future damages are prioritized too highly, so considerations of time and forgiveness become fraught as those longer-term issues get mixed up with tribalistic meanings, whether political, financial, or moral (look to Ukraine for another prime example). Is a true judgement of most things, then, determined mostly by the time frame used in any reflective deliberation, and is an adjusted and singularly restrictive time frame the be-all and end-all of today’s politics, its responsibilities, and its regulative outcomes? In most circumstances, slowing technical advance through regulative accountability becomes a risk factor: considerations of financial return always win, and the onus is placed on people to protect themselves. Sadly, the knowledge needed for such protection in a technological world is not socially attainable without ignoring many other, probably more important, factors. Applying a limited time frame via pragmatic approaches leads to the question of what immediately perceivable costs, applied as damages, would sufficiently rebalance that equation. But does it answer the more demanding question: would such regulation and accountability of artificial intelligence result in the acceptability, or even necessity, of applying the same controls to the human world, leading to more tightly regulated and monitored education, learning, and thinking processes?
