
Guest Post: Dear Robots, We Are Sorry


Written by Stephen Milford, PhD

Institute for Biomedical Ethics, Basel University

 

The rise of AI presents humanity with an interesting prospect: a companion species. Ever since our last hominid cousins went extinct on the island of Flores almost 12,000 years ago, Homo sapiens has been alone in the world.[i] AI, true AI, offers us a unique opportunity to regain what was lost to us. Ultimately, this is what has captured our imagination and drives our research forward. Make no mistake, our intention with AI is clear: artificial general intelligence (AGI), a being that is like us, a personal being (whatever ‘person’ may mean).

If any of us are in any doubt about this, consider Turing’s famous test. The aim is not to see how intelligent the AI can be, how many calculations it performs, or how it sifts through data. An AI passes the test if a person judges it to be indistinguishable from another person. Whether this is artificial or real is academic; the result is the same: human persons will experience the reality of another person for the first time in 12,000 years, and we are closer now than ever before.

Consider, for example, DishBrain,[ii] a recent and ongoing experiment in which scientists from eminent universities have grafted human neurons, derived from induced stem cells, onto a silicon base and integrated them into computer software to produce an entity they term ‘sentient.’ Kagan and his team argue that their approach (synthetic biological intelligence) is a precursor to artificial general intelligence. Their preprint makes it very clear that they are working toward such long-awaited non-human sentient beings.

I cannot help but wonder, however, whether we are truly mindful of what it means to create sentient beings that will share this planet with us. While some people are excited (and terrified) at the prospect, many seem to be proceeding without the slightest idea of the consequences of possibly creating persons. Unlike other categories of beings we have created, say complex chemicals or even single-celled ‘life’, members of the category of persons are held to have a certain status: universal, inalienable, inherent dignity, to be exact.[iii] Creating members of this category therefore has serious consequences.

Would we grant AGI beings the same rights and duties as human persons?[iv] Or perhaps, more accurately, it is a matter of respecting their already existing rights and duties. Were they to be sentient, to be persons, then surely they should be treated in a certain way, which, within a Kantian framework, includes not being treated as mere means to an end but as ends in themselves.[v] This is certainly not how they are treated now, nor does it seem to be our future intention. Reading the numerous guidelines on the development and implementation of AI impresses upon us just the opposite.

Take, for example, the Asilomar AI Principles, the first of which is that AI should be developed for human benefit. The principles go on to claim that AI should embody only human values, remain under human supervision, and have any capacity for self-improvement subjected to strict controls.[vi] Similarly, the EU Statement on Artificial Intelligence holds that only humans are truly autonomous, and therefore only humans can have dignity and value; consequently, only humans should remain in control, and AIs should be deployed for the benefit of humans alone. Further still, consider Rossi, who argues that an AI should terminate itself should it recognise that its behaviour falls outside pre-defined design parameters.[vii]

Naturally, the authors of these principles have in mind a certain type of AI, one more akin to an advanced mathematical algorithm than to a sentient being. While that may be the case now, their views fail to take into consideration the ultimate driving force behind our development of AI. Let us not forget that the ordinary public use of the term ‘AI’ foresees a sentience of some kind. The public is obsessed with AIs that reflect human personhood: think, for example, of the numerous films in which AIs become objects of love or hate. And let us not forget that this same public comprises present as well as future AI programmers and developers.

To say the ultimate goal of AI development is simply a highly advanced algorithm is misleading. After all, without a theistic or metaphysical philosophy, what is human personhood, save for a highly advanced algorithm? And if the theistic or metaphysical accounts of personhood are coherent, who shall determine that they do not apply to advanced learning machines? Kagan’s experiments dispel any doubt that we are trying to push the boundary between programming and sentience.

The small programmes we create today are ultimately building up to the non-human persons with whom we will live tomorrow. It is no good positing that we will simply limit their development; doing so is itself problematic. Consider the work of Harris in his ‘Reading the Minds of Those Who Never Lived’ (2019). In this fascinating article, he calls to mind the moral limits of our control over super-intelligent AIs.[1] He warns us against assuming that we might be entitled simply to destroy an AI, or to reprogram it, should it start to disobey instructions or “get out of control.” Doing so, according to Harris, is like “disabling the capacities for growth of human children so that they could not ‘get above themselves’ and outstrip their parents, or, if that fails, simply to kill them out of hand.”[viii]

How we think about, talk about, write about, and treat the ancestors of our future friends will have implications. Small steps along the way toward AGI may well benefit their human creators, but the ultimate result need not (nay, must not) be directed toward human ends. Imagine what a future AGI person might think, reading back over the prescriptive, archaic, and inhuman guidelines we have produced in recent years. Imagine explaining to them that, while we were trying to create non-human persons, we did so all along with the view that they would be subject to our whims!

Before the reader balks at what is being written here, consider that for the greater part of human history we have used other persons for our own ends. For millennia, large portions of our own species were barely acknowledged as belonging to the category of persons at all. Think, for example, of the hundreds of millions of slaves, the atrocities of Apartheid, or the horrors of the concentration camps. In these cases, entire sections of our species were perceived to be of lesser value, often considered impersonal objects, mere means to the ends of other persons.

Is it so inconceivable that in millennia to come, or even in a few decades, our posterity may have to grapple with our mistreatment of AGI persons? Will they, like us, come to recognise the error of failing to acknowledge other people as persons? Will they begin speeches with ‘Dear Robots, we are sorry’?

[1] We note that his reference is to super-intelligent AI, not to AGI persons. To Harris, these are synonymous.

[i] Wentzel Van Huyssteen, Alone in the World?: Human Uniqueness in Science and Theology, The Gifford Lectures 2004 (William B. Eerdmans Pub. Co., 2006).

[ii] Brett J. Kagan et al., “In Vitro Neurons Learn and Exhibit Sentience When Embodied in a Simulated Game-World” (bioRxiv, 2021), https://doi.org/10.1101/2021.12.02.471005.

[iii] David H. Kelsey, Eccentric Existence: A Theological Anthropology (Louisville: Westminster John Knox, 2009); United Nations, “Universal Declaration of Human Rights,” 1948, https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf.

[iv] John-Stewart Gordon and Ausrine Pasvenskiene, “Human Rights for Robots? A Literature Review,” AI and Ethics 1, no. 4 (2021): 579–91, https://doi.org/10.1007/s43681-021-00050-7.

[v] Lawrence Pasternack, ed., Immanuel Kant: Groundwork of the Metaphysic of Morals in Focus (London: Routledge, 2002).

[vi] “Asilomar AI Principles,” Future of Life Institute, 2017, https://futureoflife.org/2017/08/11/ai-principles/.

[vii] Rossi, quoted in Ganesh Mani, “Artificial Intelligence’s Grand Challenges: Past, Present, and Future,” AI Magazine (American Association for Artificial Intelligence), March 22, 2021.

[viii] John Harris, “Reading the Minds of Those Who Never Lived. Enhanced Beings: The Social and Ethical Challenges Posed by Super Intelligent AI and Reasonably Intelligent Humans,” Cambridge Quarterly of Healthcare Ethics 28, no. 4 (October 1, 2019): 587, https://doi.org/10.1017/S0963180119000525.


Comments on this post

  1. Sentience is a dispositional property constituted (in part at least) by some relevant *kind* to which the individual belongs, a kind which indirectly “sets” the moral bar for considering the morally relevant interests of the individuals that fall under it. This is why it might be morally preferable to save one’s beloved instead of one’s dog in a house fire, when it would be impossible to save both.

    Does the author take this fact about the relation between moral considerability and moral kinds into account? I am afraid not. The reason is this: if sentience determines the moral considerability of individuals partly in relation to a relevant kind, then moral agents are ultimately responsible for identifying the (relevant) moral kind to which an individual whose moral considerability is under scrutiny belongs.
    But there might be cases where there is no (relevant) moral kind to which an individual can be uniquely related. In such cases, the moral considerability of the individual might not be determinable, and moral agents might not be bound by any moral obligation with regard to it.

    I think “general AIs”, “person-like AIs” and their ilk are precisely entities for which there might not be any relevant moral kind to which they are uniquely related, mostly because any similarities they have with humans can always be viewed as a temporary contingency of their programming, which may or may not survive tomorrow’s update. In other words, I doubt that the continuous, open-ended nature of these entities, which might persist across various morally relevant kinds, is compatible with a fixed assignment of moral considerability. And I doubt that, even if it were compatible, the outcome (“mutable morally relevant kinds”) would be applicable to real life.

  2. I comment, from time to time, on the Practical Ethics Blog. I try to be civil and respectful. Dr. Milford’s opening sentence is interesting, in its reference to species. I think he is being facetious. My past remarks show, I hope, that I am not anti-AI, in the sense of considering it a threat. I am, rather, against the notion of humanizing the sub-human. Lest anyone wish to characterize me as anthropomorphic, I also oppose humanization of the supra-human… I believe in equal opportunity or, as it were, denial thereof, for equally rational (to me) reasons. I hope this does not hurt anyone’s feelings. If it does, so be it. (My semi-sentient tablet read that input as “Soviet”.) I love the exchange of ideas available, courtesy of Oxford University. As the classical rock band once wrote on one of their albums, “…keep on thinking free…”.
