
Are We Heading Towards a Post-Responsibility Era? Artificial Intelligence and the Future of Morality

By Maximilian Kiener. First published on the Public Ethics Blog

AI, Today and Tomorrow

77% of our electronic devices already use artificial intelligence (AI). By 2025, the global AI market is estimated to grow to 60 billion US dollars. By 2030, AI may even boost global GDP by 15.7 trillion US dollars. And, at some point thereafter, AI may become the last human invention, provided it optimises itself and takes over research and innovation, leading to what some have termed an ‘intelligence explosion’. In the grand scheme of things, Google CEO Sundar Pichai believes, AI will then have a greater impact on humanity than electricity and fire did.

Some of these latter statements will remain controversial. Yet it is also clear that AI increasingly outperforms humans in many areas that no machine has ever entered before, including driving cars, diagnosing illnesses, selecting job applicants, and more. Moreover, AI promises great advantages, such as making transportation safer, optimising health care, and assisting scientific breakthroughs, to mention only a few.

There is, however, a lingering concern. Even the best AI is not perfect, and when things go wrong, e.g. when an autonomous car hits a pedestrian, when Amazon’s Alexa manipulates a child, or when an algorithm discriminates against certain ethnic groups, we may face a ‘responsibility gap’, a situation in which no one is responsible for the harm caused by AI.  Responsibility gaps may arise because current AI systems themselves cannot be morally responsible for what they do, and the humans involved may no longer satisfy key conditions of moral responsibility, such as the following three.

AI & The Conditions of Responsibility

First, many scholars argue that a key condition of responsibility is control: one can be responsible for something only if one had meaningful control over it. Yet AI systems afford humans very little control. Once in use, AI systems can operate at a speed and level of complexity that make it impossible for humans to intervene. Admittedly, people may be able to decide whether to apply AI in the first place, but once this decision has been made, even justifiably, there is not much control left. The mere decision to risk a bad outcome, if it is itself justified and neither negligent nor reckless, may not be sufficient for genuine moral responsibility. Another reason for the lack of control is the increasing autonomy of AI. ‘Autonomy’ here means the ability of AI systems not only to execute tasks independently of immediate human control, but also (via machine learning) to shape the principles and algorithms that govern their own operation; such autonomy significantly disconnects AI from human control and oversight. Lastly, there is the so-called problem of many hands: a vast number of people are involved in the development and use of AI, and each of them has, at most, only a very marginal degree of control. Hence, insofar as control is required for responsibility, responsibility for the outcomes of AI may be lacking.

Second, scholars have argued that responsibility has an epistemic condition: one can be responsible for something only if one could have reasonably foreseen or known what would happen as a result of one’s conduct. But again, AI makes it very difficult to meet this condition. The best AI systems tend to be those that are extremely opaque. We may understand what goes into an AI system as its input data, and also what comes out as a recommendation or action, but often we cannot understand what happens in between. For instance, deep neural networks such as Google’s image recognition model ‘Inception v3’ can base a single decision on over 20 million parameters, which makes it impossible for humans to examine the decision-making process. In addition, AI systems’ ways of processing information and making decisions are becoming increasingly different from human reasoning, so that even scrutinising all the steps of a system’s internal working processes would not necessarily lead to an explanation that seems sensible to a human mind. Finally, AI systems are learning systems and constantly change their algorithms in response to their environment, so that their code is in constant flux, leading to some sort of technological panta rhei. For these reasons, we often cannot understand what an AI will do, why it will do it, and what may happen as a further consequence. And insofar as the epistemic condition requires the foreseeability of harm to some degree of specificity, rather than only in very general terms (e.g. that autonomous cars ‘sometimes hit people’), meeting this condition presents a steep challenge too.

Third, some theorists argue that one is responsible for something when it reflects one’s quality of will, which could be one’s character, one’s judgment, or one’s regard for others. On this view, control and foresight may not be strictly necessary, but even then, the use of AI poses problems. When an autonomous car hits a pedestrian, for instance, it may well be that the accident does not reflect the will of any human involved. We can imagine a case in which there is no negligence but just bad luck, so that the accident would not reflect poorly on anyone’s character, judgment, or regard for others.

Thus, various approaches to responsibility suggest that no one may be morally responsible for the harm caused by AI. But even if this is correct, a further important question remains: why should we care about a responsibility gap in the first place? What would be so bad about a future without, or with significantly diminished, human responsibility?


AI & The Point of Responsibility

To address this question, we need to distinguish between at least two central ideas about responsibility. The first explains responsibility in terms of liability to praise or blame.[1] On some of these views, being responsible for some harm means deserving blame for it. Thus, a responsibility gap would mean that no one could be blamed for the harm caused by AI. But would this be so bad? Of course, people may have the desire to blame and punish someone in the aftermath of harm. In addition, scholars argue that blaming practices can be valuable for us, e.g. by helping us to defend and maintain shared values.[2] Yet the question remains whether, in the various contexts of AI, people’s desire to blame really ought to be satisfied, rather than overcome, and also what value blaming practices ultimately hold in these different contexts. Depending on our answers to these questions, we may conclude that a responsibility gap in terms of blameworthiness may not be so disadvantageous in some areas, even though blame may still be of value in others.

The second idea identifies responsibility with answerability, where an answerable person is one who can rightly be asked to provide an explanation of their conduct.[3] Being answerable for something does not imply any liability to blame or praise. It is at most an obligation to explain one’s conduct to (certain) others. Blame would be determined by the quality of one’s answer, e.g. by whether one has a justification or excuse for causing harm. This approach to responsibility features the idea of an actual or hypothetical conversation, based on mutual respect and equality, where the exchanged answers are something that we owe each other as fellow moral agents, citizens, or friends. Here, the question of a responsibility gap arises in a different way and concerns the loss of a moral conversation. Depending on our view on this matter, we may conclude that losing responsibility as answerability could indeed be a serious concern for our moral and social relations, at least in those contexts where moral conversations are important. But in any case, the value and role of answerability may be quite different from the value and role of blame, and thus addressing the challenge of responsibility gaps requires a nuanced approach too.


[1] Cf. Pereboom, D. Free Will, Agency, and Meaning in Life. Oxford University Press, 2014.

[2] Cf. Franklin, C. Valuing Blame. In Coates, J. & Tognazzini, N. (Eds.), Blame: Its Nature and Norms, 207-223. Oxford University Press, 2012.

[3] Smith, A. (2015). Responsibility as Answerability. Inquiry 58(2): 99-126.


6 Comments on this post

  1. What an interesting question! If I did not believe differently or were plagued with paranoia, I might think somehow, someone read my mind. In my studied opinion, post-responsibility is already here, with or without AI. Virtually (no double entendre intended) no one wants to accept responsibility for anything. Victim culture asserts that, since everyone is out to get you, you bear no fault for anything. If you are ‘busy’, fiddling with your smartphone or other IT gadget, it is everyone else who must look out for you, regardless of where you are or where you are going. Learning how to drive a car has been reduced to getting behind the wheel and driving: knowledge and obedience of traffic laws are left to everyone else. As a practical matter, many people already behave like automatons: programmed, but unprepared for circumstance and contingency. AI need not DO anything. Dehumanization proceeds, unassisted and uninformed.

  2. As presented, the situation described reflects the approach taken by many social groups to contentious issues over many decades. But the question in this particular case very clearly illustrates that initially approaching the issue from that direction is frequently wrong, because it masks or reduces the focus on any creator’s material wishes.
    If those creating and promoting AI consider that they can claim any type of praise for doing so, and expect to be recompensed (probably generously) for their efforts, then they also create a form of moral responsibility for themselves. Unless morality is changing beyond any recognition, part of claiming benefits for an outcome is creating forms of responsibility for those benefits. Debating and arguing about the quantity or quality of responsibility for issues masks that aspect, which, at a minimum, the individuals themselves have to deal with. This is something cultures attempt to deal with for their members through the ethical structures they create to promote the expansion of their own knowledge base. And questioning responsibility at the level of a culture frequently provokes much ideological resistance. As mentioned in the article, a major and apparently paradoxical difficulty hidden amongst all of this is the control of intelligence and knowledge, which many individuals and social groups continually attempt to direct in a controlled way for their own benefit, often at the same time creating a layer of protection enabling a denial of direct responsibility whilst accepting any apparent gains.

  3. Harrison Ainsworth

    Grant *everything* moral status (humans, bots, plants, machines, etc), purely by virtue of their interaction with others, and then moral decisions become a calculation over some model of the whole causal network ‒ something like ‘eigenmorality’ (search for that and read the article).

    Morality is characteristically and essentially a multi-agent calculation. This basically falls into two: deciding the overall direction (specifying the ‘good’), and coordinating differing agents (formulating the rules). Deciding what is wanted/good is harder, but the other half, coordination, is not obscure, it is just an optimisation.

    Morality does not require ‘consciousness’ or ‘sentience’ or ‘intention’ or any of such common philosophical impedimenta. They are only misleading anthropocentrisms. You mostly just need models ‒ on a domain of multiple agents ‒ of what things are likely to happen, and how things might be modifiable.

    (Also, check out Peter Railton ‒ he seems the only philosopher who gets this.)
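    To make the ‘eigenmorality’ idea a little more concrete, here is a minimal, purely illustrative sketch in Python of its simplest eigenvector-style variant. The four agents, the interaction values, and the scoring rule are invented assumptions for illustration only, not taken from this comment or from the original essay; the point is just that an agent scores highly when it treats highly-scoring agents well, which amounts to a power-iteration calculation over the interaction network.

        import numpy as np

        # Purely illustrative "eigenmorality"-style scoring over a tiny, made-up
        # network of four agents (humans, bots, machines, whatever they may be).
        # interactions[i][j] in [0, 1]: how well agent i treats agent j.
        interactions = np.array([
            [0.0, 0.9, 0.6, 0.1],
            [0.8, 0.0, 0.7, 0.2],
            [0.5, 0.6, 0.0, 0.1],
            [0.1, 0.2, 0.1, 0.0],
        ])

        def eigenmorality_scores(matrix, iterations=100):
            # Power iteration towards the dominant eigenvector: an agent's score
            # grows when it treats high-scoring agents well.
            scores = np.ones(matrix.shape[0])
            for _ in range(iterations):
                scores = matrix @ scores
                scores = scores / np.linalg.norm(scores)
            return scores

        # Relative "moral standing" of the four invented agents.
        print(eigenmorality_scores(interactions))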

  4. Sorry—no, I am not sorry—I don’t buy it. I don’t have any beef with Railton or anyone else. But I do not agree with morality as a machine trait. There are some ideas/theses which I can’t agree with, period.

  5. Surely the mechanisms of morality/ethics should not be denied, be it a strictly logical mechanism, an emotional one, or any mixture. They do result in moralities containing ethical frameworks which to some extent reflect the needs of their participants (I do not deny that the ability to exercise free choice and the realistic availability of other options influence that situation). And yes, everything can have a moral status, but in a hierarchical conception the status becomes quantified during ethical reflection, producing logical constructs leading to further categorisation/prioritisation and structural objectification.

    For the purpose of this blog article, two past issues will cast some illumination:
    The plastic problem: In the 1950s and 1960s, when plastics were first being utilised, some discussion identified the lifespan of that material and how disposal and the resulting pollution would become a major difficulty. The points made in those discussions were largely ignored; a few attempts were made to reduce the lifespan of some plastics, but for whatever reasons the methods to achieve that were not widely implemented. Many will say look at the benefits accrued by humanity during that time…
    The ring pull for tin cans problem:
    Discarded ring pulls from tin cans became a very real problem as the consumption of canned drinks grew. The pollution and damage they were causing were highlighted, prompting research to resolve the difficulty, which was harming farming. A resolution was found by keeping the ring pull attached to the can, and that was widely implemented, largely resolving what had reportedly become a very real problem for farmed animals.

    Those issues are not dissimilar in that produced items caused environmental difficulties which eventually affected humanity. One was not really resolved, and is now a major problem. The other was largely resolved.

    Now look to AI and see that what the discoverers/developers/inventors/parents have is an immediate and very fast feedback loop (if the deep thinkers on this subject are to be believed) which they may not be able to keep up with and which is likely to bite them hard if they get it wrong. Reportedly there will be little time lag before it potentially ‘bites back’ (much faster than ring pulls), and ostensibly the only way a degree of delay can be built in (as in the attempts with plastics) is to tightly control it in the early stages. And yet true AI will not be some stable commodity allowing a ring-pull answer or delayed response to be found (or not), because intelligent life is different. Ethical algorithms are mostly strictly logical and frequently become rigid, so a more flexible moral mechanism will undoubtedly be sought, one which would continue to reflect the concerns of the human element in the intelligence equation even when at times the AI could not be fully comprehended by its creators. Getting it right first time has become a necessarily real element in this. And with humanity’s history being rather short of successful examples of truly respecting different intelligences/moralities, the advantages perceived to come from rapid delivery are more likely to be the main driving force. Personally, that does not seem to be Post-Responsibility to me, more a blind eye, situation normal, and deal with any problems later. Because for the driving mindsets, problems reveal opportunities.

  6. I’ll cut to the chase, gentlemen. We are already there, like it or not. Interests, preferences and motives have led to some ludicrous propositions and conclusions. Insofar as mine (interests, etc.) do not matter, neither does it matter what I claim. If anyone chooses to believe artificial intelligence is, can be, or will be capable of moral or ethical behavior, that is his/her choice, axiologically or deontologically. Given the confusion now over ethics and morality, it really does not make much difference. What will likely happen—is happening—is that those facets of human behavior will be re-defined to conform with changing paradigms. Reality will be whatever the hell we say it is. Postmodernism has already started that snowboulder. I have also criticized cross-disciplinary obfuscations and misdirection. So far, only a few have allied themselves with that criticism. Mainly due to their lack of interests, preferences and motives that depend, at least in part, upon such alliances. My job is done here. Yours is just beginning. Good luck.
