
Stay Clear of the Door

[Image: An AI door, according to a generative AI]

Written by David Lyreskog 

 

In what is quite possibly my last entry for the Practical Ethics blog, as I’m sadly leaving the Uehiro Centre in July, I would like to reflect on some things that have been stirring in my mind over the last year or so.

In particular, I have been thinking about thinking with machines, with people, and what the difference is.

The Uehiro Centre for Practical Ethics is located in an old carpet warehouse on an ordinary side street in Oxford. Facing the building, there is a gym to your left and a pub to your right, mocking the researchers residing within the centre’s walls with a daily dilemma.

As you are granted access to the building – be it via buzzer or key card – a dry, somewhat sad, voice states “stay clear of the door” before the door slowly swings open.

The other day a colleague of mine shared a YouTube video of the presentation The AI Dilemma, by Tristan Harris and Aza Raskin. In it, they share with the audience their concerns about the rapid and somewhat wild development of artificial intelligence (AI) in the hands of a few tech giants. I highly recommend it. (The video, that is. Not the rapid and somewhat wild development of AI in the hands of a few tech giants).

 

Much like the thousands of signatories of the March open call to “pause giant AI experiments”, and, more recently, the “Godfather of AI” Geoffrey Hinton, Harris and Raskin warn us that we are on the brink of major (negative, dangerous) social disruption due to the power of new AI technologies.

 

Indeed, there has been a bit of a public buzz about “AI ethics” in recent months.

 

While it is good that there is general awareness of, and public discussion about, AI – or any majorly disruptive phenomenon, for that matter – there is a potential problem with the abstraction: AI is portrayed as one big, emerging technological behemoth which we cannot or will not control. But it has been more than a quarter of a century since a computer first beat the reigning world chess champion. We have been using AI for many things, from medical diagnosis to climate prediction, with little to no concern about it besting us and/or stripping us of agency in these domains. In other domains, such as driving cars and military applications of drones, there has been significantly more controversy.

All this is just to say that AI ethics is not for hedgehogs – it is not “one big thing”[i] – and I believe that we need to actively avoid a narrative, and a line of thinking, which paints it as such. In examining the ethical dimensions of the multitude of AI inventions, then, we ought at the very least to take care to limit the scope of our inquiry to the domain in question.

 

So let us, for argument’s sake, return to that door at the Uehiro Centre, and the voice cautioning visitors to stay clear. Now, as far as I’m aware, the voice and the door are not part of an AI system. I also believe that there is no person who is tasked with waiting around for visitors asking for access, warning them of the impending door swing, and then manually opening the door. I believe it is a quite simple contraption, with a voice recording programmed to be played as the door opens. But does it make a difference to me, or other visitors, which of these possibilities is true?

 

We can call these possibilities:

Condition one (C1): AI door, created by humans.

Condition two (C2): Human speaker & door operator.

Condition three (C3): Automatic door & speaker, programmed by humans.

 

In C3, it seems that the outcome of the visitor’s action will always be the same once the buzzer is pushed or the key card is blipped: the voice will automatically say ‘stay clear of the door’, and the door will open. In C1 and C2, the same could be the case. But it could also be the case that the AI/human has been instructed to assess the risk for visitors on a case-by-case basis, and to advise caution only if there is imminent risk of collision or such (were this the case, I would consistently be standing too close to the door when visiting, but that is beside the point).

 

On the surface, I think there are some key differences between these conditions which could have an ethical or moral impact, and some of these differences are more interesting than others. In C1 and C2, the door opener makes a real-time assessment, rather than following a predetermined course of action in the way C3’s door opener does. More importantly, C2 is presumed to make this assessment from a place of concern, in a way which is impossible in C1 and C3, because the latter two are not moral agents and therefore cannot be concerned. They simply do not have the capacity. And our inquiry could perhaps end here.

But that, it seems, would be a mistake.

 

What if something were to go wrong? Say the door swings open, but no voice warns me to stay clear, and so the door whacks me in the face[ii]. In C2, it seems the human whose job it is to warn me of the imminent danger might have done something morally wrong, assuming they knew what to expect from opening the door without warning me but failed to do so due to negligence[iii]. In C1 and C3, on the other hand, while we may be upset with the door opener(s), we do not believe that they did anything morally wrong – they just malfunctioned.

 

My colleague Alberto Giubilini highlighted the tensions in the morality of this landscape in what I thought was an excellent piece arguing that “It is not about AI, it is about humans”: we cannot trust AI, because trust is a relationship between moral agents, and AI does not (yet) have the capacity for moral agency and responsibility. We can, however, rely on AI to behave in a certain way (whether we should is a separate issue).

 

Similarly, while we may believe that a human should show concern for their fellow person, we should not expect the same from AIs, because they cannot be concerned.

 

Yet, if the automatic doors continue to whack visitors in the face, we may start feeling that someone should be responsible for this – not only legally, but morally: someone has a moral duty to ensure these doors are safe to pass through, right?

 

In asking this, we expand the field of inquiry from the door opener to the programmer or constructor of the door opener, and perhaps to whoever is in charge of maintenance.

 

A few things spring to mind here.

 

First, when we find no immediate moral agent to hold responsible for a harmful event, we may expand the search field until we find one. That search seems to me to follow a systematic structure: if the door is automatic, we call the support line; if support fails to fix the problem and itself turns out to be an AI, we turn to whoever is in charge of support; and so on, until we find a moral agent.

 

Second, it seems to me that, if the door keeps slamming into visitors’ faces in C2, we will not only morally blame the door operator, but also whoever left them in charge of that door. So perhaps this systems-thinking does not apply only when there is a lack of moral agents, but also, on a more general level, when we are de facto dealing with complicated and/or complex systems of agents.

 

Third, let us conjure a condition four (C4) like so: the door is automatic, but maintenance support is handled by an AI system that is usually very reliable, and the AI support system, in turn, is overseen by a (human) person.

 

If the person in charge of an AI support system that failed to provide adequate service to a faulty automatic door is to blame for anything, it is plausibly for not adequately maintaining the AI support system – but not for whacking people in the face with a door (because they did not do that). Yet perhaps there is some form of moral responsibility for the face-whacking to be found within the system as a whole. That is, the compound of door, AI, human, and so on has a moral duty to avoid face-whacking, regardless of any individual moral agent’s ability to whack faces.

 

If this is correct, it seems to me that we again[iv] find that our traditional means of ascribing moral responsibility fails to capture key aspects of moral life: it is not the case that any agent is individually morally responsible for the face-whacking door, nor are there multiple agents who are individually or collectively responsible for the face-whacking door. Yet, there seems to be moral responsibility for face-whacking doors in the system. Where does it come from, and what is its nature and structure (if it has one)?

 

In this way, it seems that not only cognitive processes such as thinking and computing can be distributed throughout systems, but perhaps also moral capacities such as concern, accountability, and responsibility.

And in the end, I do not know to what extent it actually matters, at least in this specific domain. Because at the end of the day, I do not care much whether the door opener is a human, an AI, or an automatic mechanism.

 

I just need to know whether or not I need to stay clear of the door.

Notes & References.

[i] Berlin, I. (2013). The hedgehog and the fox: An essay on Tolstoy’s view of history. Princeton University Press.

[ii] I would like to emphasize that this is a completely hypothetical case, and that I take it to be safe to enter the Uehiro Centre. The risk of face-whacking is, in my experience, minimal.

[iii] Let’s give them the benefit of the doubt here, and assume it wasn’t maleficence.

[iv] Together with Hazem Zohny, Julian Savulescu, and Ilina Singh, I have previously argued this to be the case in the domain of emerging technologies for collective thinking and decision-making, such as brain-to-brain interfaces. See the Open Access paper Merging Minds for more on this argument.


4 Comments on this post

  1. Paul D. Van Pelt

    Adopting a rational positivist stance, I will contend that you are right and, further, that Conditions 1 & 2 are of no worrisome consequence. Condition 3 is not especially concerning either, if, and only if, we can comfortably assume that AI will only be created and updated to do the things we want it to do. I do not know, but can imagine AI protagonists, generally, are thinking and operating on such a ground assumption. Others, thinking along the line of an AI/transhumanist connection, may be anticipating and advocating far more. But there is that ‘if, and only if’ disclaimer. Inventors, researchers and developers like to tinker with creations; push the envelope; “let’s see what this baby can do!”. How hazardous are errors, even unforced ones? Professional tennis players can speak to that, though they are loath to do so. So, yes, your points are well-taken. There are three sides to every story: yours, mine and the one neither of us counted on. We don’t know what we don’t know until the shit hits the fan or the spacecraft malfunctions.

  2. The article covers perfectly conditioned and expected input/response situations. The response by PDVP mentions unforced responses (players in tennis games), but neither appears to consider the maliciously caused response, which, sadly, human society often also requires an awareness of.
    Whilst it is always acknowledged that differences between moral and legal perspectives raise many questions about what actually allows/defines truly moral action, purely illegal actions are often missed because of that regulated focus, even though in some circumstances those actions may be truly moral.
    To illustrate a different situation, take an example similar to the automated voices cautioning care (which often also provide some legal protection against liability in accidents): the common burglar alarm. During a time when the cost of simple alarms fell considerably, their widespread use disrupted burglars in their activity. Human ingenuity soon came to the fore, and burglars deliberately set off alarms several times over several days or weeks, until it became a normal thing for an alarm to activate at that time and the activation was either ignored or given a lower priority, at which point the intended goal could be achieved. Do people always continue to hear a repetitive voice of caution, or, like advertising, does it blend into the background and become overlooked? Is that ignoring/overlooking of the rule also what happens when morality becomes subsumed by a fixed and stable rule which is automatically enforced, sufficient to alleviate the moral dilemma associated with a particular situation? What is the real purpose of the door, with its access facilities and any validating recording device which may, in today’s world, also cover that entrance/exit? Is it to create private space for work and contemplation, or a secure controlled space facilitating a representation of ideas?
    Secondly, and in my view a more telling area of consideration, would be the development pathway of doors and their control mechanisms. Taking rotating doors as an example, the thought processes associated with the ongoing development of that mechanism take a clearly different route, arguably expressive/creative of different and more considerate worldviews than many of the ordinary door access control mechanisms, driven by different physical parameters influencing different mental approaches yet softly enforcing entrance and exit measures. Not knowing the history of the centre’s door and control mechanisms, only their continuing use may be considered when looking to the cultural impact in the minds of those entering. Repetition appears to beget the normal, which becomes expressed in many ways, even using new technology.
    This type of development is currently ongoing with AI, and it illustrates where the main crux of the issues generally raised (which form discussions around the periphery) comes from: ideas are often herded towards particular worldviews by mechanisms which promote their own representation, expectations or abilities.

  3. This commentary is off topic and may never be read, even if it is published here. That prefaced, there was a commentary today regarding Oxford University’s role in the twentieth century; whether Oxford *dominated philosophy* during those yesteryears. The originator of the blog went to some pains to illustrate the academic affiliations of a couple of dozen thinkers, some I have read, many I have not. The thinker’s conclusions were far from conclusive, it seemed to me. I need not name names, nor acknowledge the originator. Or his blog. I know little about him, beyond allegations he is not universally liked. Those, and a run-in or two that demonstrated, clearly, that he and I do not see eye to eye. I will say this much. Philosophy is older than almost any other human discipline. Possibly, tied with science, and/or religion. I have argued one or another, over the others. Everyone who bothers to think also holds interests, motives and preferences. IMPs. Those are influenced, interestingly enough, by Davidson’s notion of propositional attitudes, from which I derived the IMP notion. Frankly, it is ludicrous to me to ask who dominated philosophy in the twentieth century. My elder brother has said it best: we can’t cooperate while competition runs the show. Exactly.

  4. This year is nearly over. I think many of us will not be sad to watch it depart, preferring to stay clear, as best we can. I read something today on the atrocities of another dark time in America: McCarthyism. The parallels between what was happening then and what is happening now in America are disturbing; the damage being inflicted upon society and democracy, appalling. In my comments on the story told, I talked about growing up in the 1950s and 60s. During my youth, communism was not discussed much in polite conversation…not in my junior high and high school, anyway. Teachers mostly kept their mouths shut about anything beyond the accepted curriculum. Understandably so. Now, it seems to me, the threats to democracy, real or perceived, are from within, and originate not from a communist threat, but from dissidence over what form democracy should take. Interests, motives and preferences (IMPs) are as divergent as they can be. I need not say anything on the politics of it. I don’t know what more to say on any of this, other than stay clear of the door.
