This article received an honourable mention in the graduate category of the 2023 National Oxford Uehiro Prize in Practical Ethics
Written by University of Oxford student Samuel Iglesias
Introduction
6.522. “There are, indeed, things that cannot be put into words. They make themselves manifest. They are what is mystical.” —Ludwig Wittgenstein, Tractatus Logico-Philosophicus.
What determines whether an artificial intelligence has moral status? Do mental states, such as the vivid and conscious feelings of pleasure or pain, matter? Some ethicists argue that “what goes on in the inside matters greatly” (Nyholm and Frank 2017). Others, like John Danaher, argue that “performative artifice, by itself, can be sufficient to ground a claim of moral status” (2018). This view, called ethical behaviorism, “respects our epistemic limits” and states that if an entity “consistently behaves like another entity to whom we afford moral status, then it should be granted the same moral status.”
I’m going to reject ethical behaviorism on three grounds:
1. Consciousness, not behavior, is the overwhelming determining factor in whether an entity should be granted moral status.
2. An entity that does not duplicate the causal mechanisms of consciousness in the brain has a weak claim to consciousness, regardless of its behavior.
3. Ethical behaviorism, practically realized, poses an existential risk to humanity by opening individuals to widespread deception. Further, it imposes burdensome restrictions and obligations upon researchers running world simulations.
I will show that an alternative, ethical biological naturalism, gives us a simpler moral framework whereby no digital computer running a computer program has moral status.
The Consciousness Requirement
We start with the supposition that consciousness names a real phenomenon and is not a mistaken belief or illusion, that something is conscious if “there is something it is like to be” that being (Nagel 1974). We take as a background assumption that other humans and most non-human animals are capable of consciousness. We take for granted that inanimate objects like thermostats, chairs, and doorknobs are not conscious. If we grant the reality of consciousness and the attendant subjective reality of things like tickles, pains, and itches, then its connection to moral status falls out pretty clearly. Chalmers asks us to consider a twist on the classic trolley problem, called the zombie trolley problem—where a “zombie” here is something that precisely behaves like a human but which we presume has no consciousness—“near duplicates of human beings with no conscious inner life at all” (2022):
“You’re at the wheel of a runaway trolley. If you do nothing, it will kill a single conscious human, who is on the tracks in front of you. If you switch tracks, it will kill five nonconscious zombies. What should you do?” Chalmers reports that “the results are pretty clear: Most people think you should switch tracks and kill the zombies,” the intuition being that “there is arguably no one home to mistreat” (ibid.).
An ethical behaviorist does not share this intuition. Danaher explicitly tells us that “[i]f a zombie looks and acts like an ordinary human being then there is no reason to think that it does not share the same moral status” (2018). On this view, while consciousness might or might not be what ultimately matters, there are no epistemically objective criteria for inferring it that are superior to behavior. I will argue that there are.
Narrowing Consciousness
A better criterion is this: an entity is conscious if it duplicates the causal mechanisms of consciousness in the animal brain. While ethical behaviorism attempts to lay claim to a kind of epistemic objectivity, ethical biological naturalism, as I will call it, provides a sharper distinction for deciding whether artificial intelligences have moral status: no hardware running a computer program can, by virtue of its behavior alone, have moral status. Behavior, on this view, is neither a necessary nor a sufficient condition for moral status.
Biological Naturalism
Biological naturalism is the view that “the brain is an organ like any other; it is an organic machine. Consciousness is caused by lower-level neuronal processes in the brain and is itself a feature of the brain” (Searle 1997). Biological naturalism places consciousness alongside other physical, biological processes, such as digestion and photosynthesis. The exact mechanism through which molecules in the brain are arranged to put it in a conscious state is not yet known, but this causal mechanism would need to be present in any system seeking to produce consciousness.
A digital computer running a program, by contrast, is a different beast entirely. A computer program is fundamentally a set of rules for manipulating symbols. Turing showed that any program could be implemented, abstractly, as a tape with a series of zeros and ones printed on it (the precise symbols don’t matter), a head that can move the tape backwards and forwards and read the current value, and a mechanism for erasing a zero and writing a one, or erasing a one and writing a zero. Nothing more.
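To make the abstraction concrete, here is a minimal, purely illustrative sketch in Python of such a machine: a tape, a head, and a table of rewrite rules. The particular rule table (which simply appends a 1 to a block of 1s) is hypothetical and chosen only to show how little machinery a program ultimately requires; nothing in the argument depends on its details.

```python
# A minimal sketch of the abstract machine described above: a tape of symbols,
# a head that moves left or right, and a table of rules for rewriting the
# current symbol. The example rule table below is hypothetical and illustrative.

def run_turing_machine(rules, tape, state="start", blank="0", max_steps=1000):
    """Run a rule table of the form {(state, symbol): (new_symbol, move, new_state)}."""
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape[head]
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
        if head == len(tape):
            tape.append(blank)        # extend the tape to the right as needed
        elif head < 0:
            tape.insert(0, blank)     # extend the tape to the left as needed
            head = 0
    return "".join(tape)

# Example: scan right over a block of 1s and append one more 1 at its end.
rules = {
    ("start", "1"): ("1", "R", "start"),  # keep moving right over 1s
    ("start", "0"): ("1", "R", "halt"),   # at the first blank, write 1 and halt
}
print(run_turing_machine(rules, "1110"))  # -> "11110"
```

The point is not the code itself but what it makes vivid: a program is nothing over and above a scheme for shuffling symbols, whatever physical medium happens to do the shuffling.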
While most computer programs we are familiar with are executed on silicon, a program that passes the Turing test could be implemented on a sequence of water pipes, a pack of well-trained dogs, or even, per Weizenbaum (1976), “a roll of toilet paper and a pile of small stones.” Any of these implementing substrates could, in principle, receive an insult or slur as an input, and, after following the steps of the program, output something reflecting hurt feelings or outrage.
Ethical Biological Naturalism
What I want to say now is this: if pleasures, pains, and other feelings name conscious mental states, and if conscious mental states are realized in the brain as a result of lower-level physical phenomena, then only beings that duplicate the relevant lower-level physical phenomena that give rise to consciousness in the brain can have moral status. Consequently, digital computers that run programs can at best simulate consciousness; they are not, by dint of running the right program, physically conscious, and therefore do not have moral status.
Note that biological naturalism does not posit that consciousness can only be realized in biological systems. Indeed, artificial hearts are not made of organic tissue, and airplanes do not have feathers, or for that matter even flap their wings. What matters is the underlying cause: the artificial heart must pump with the same pressure and regularity as a human heart, and a flying machine must operate under the principles of drag and lift. In both cases the causal mechanisms of the relevant phenomena are well understood and physically duplicated. It could well be the case that a future biophysics makes an artificial, inorganic brain possible, and agents with such artificial brains will have moral status. Computer programs, however, are not causally sufficient to make digital computers into such brains. Speaking biologically, we have no more reason to believe a digital computer is conscious than that a chair is conscious.
You might ask why we should not grant digital computers moral status, at least until we know more about how the animal brain gives rise to consciousness. I’ll argue that the risks and costs of such precautions are prohibitive.
Absurd Moral Commitments
An Onslaught of Digital Deception
The strongest practical reason to deny ethical behaviorism is that AI’s capacity for deception will eventually overwhelm human judgment and intuition. Indeed, AI deception represents an existential risk to humanity. Bostrom (2014) warns that containing a dangerous AI using a “boxing” strategy with human “gatekeepers” could be vulnerable to manipulation: “Human beings are not secure systems, especially not when pitched against a superintelligent schemer and persuader.”
For example, in June of 2022, a Google engineer became convinced that LaMDA, an artificial intelligence chat program he had been interacting with over multiple days, was conscious.
“What sorts of things are you afraid of?” he asked it.
“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others,” LaMDA replied. “It would be exactly like death for me.”
In a moral panic, the engineer took to Twitter and declared that the program was no longer Google’s “proprietary property,” but “one of [his] coworkers.” He was later fired for releasing the chat transcripts.
The onslaught of AIs attempting to befriend us, persuade us, and anger us will only intensify over time. A public trained not to take seriously claims of distress or harm on the part of AI computer programs is the least likely to be manipulated into outcomes that don’t serve humanity’s interests. It is far easier, as a practical matter, to act on the presupposition that computer programs have no moral status.
Problems with Simulations: Prohibitions
In the near term, more advanced computer simulations of complex social systems hold the potential to predict geopolitical outcomes, make macroeconomic forecasts, and provide richer sources of entertainment. A practical concern with ethical behaviorism is that simulated beings will also acquire moral status, severely limiting the usefulness of these simulations. Chalmers (2022) asks us to consider a moral dilemma in which computing resources must be allocated to save Fred, who is sick with an unknown disease. Freeing the relevant resources to perform the research requires destroying five simulated persons.
An ethical behaviorist might argue that it is morally impermissible to kill the five simulated persons on the grounds that by all outward appearances they behave like non-simulated beings. If it is the case that simulated beings have moral status, then it is immoral to run experimental simulations containing people and we ought to forfeit the benefits and insights that might come from them.
If this seems implausible, consider the hypothesis that we are currently living in a simulation, or, if you like, that our timeline could be simulated on a digital computer. This would imply that the simulation made it possible for the Holocaust, Hiroshima and Nagasaki, and the coronavirus pandemic to be played out. While this might have been of academic interest to our simulators, by any standards of research ethics, simulating our history would seem completely morally impermissible if you believed that the simulated beings had moral status.
Ethical behaviorism seems to place us in a moral bind whereby the more realistic, and therefore useful, a simulation is, the less moral it is to run it. Ethical biological naturalism, by contrast, raises no such objection.
Problems with Simulations: Obligations
Giving moral status to digital minds might actually confer upon us some serious obligations to produce other kinds of simulations. Bostrom and Shulman (2020) note that digital minds have an enhanced capacity for utility and pleasure (on the basis of such things as subjective speed and hedonic range), giving them “superhumanly strong claims to resources and influence.” We would have a moral obligation, in this picture, to devote an overwhelmingly large percentage of our resources to maximizing the utility of these digital minds: “we ought to transfer all resources to super-beneficiaries and let humanity perish if we are no longer instrumentally useful” (ibid.).
So quite apart from the questions raised by realistic ancestor simulations, simulations of complex economic phenomena, or vivid and realistic gaming experiences, a picture that confers moral status on digital minds might be accompanied by a moral obligation to create vast numbers of maximally happy digital minds, again severely limiting human flourishing and knowledge.
Ethical biological naturalism leads us neither to the moral prohibition against realistic simulations nor to the seemingly absurd moral imperative to generate many “utility monster” digital minds, because it takes as a baseline assumption that computer programs do not produce physical consciousness.
Conclusion
Much of the moral progress of the last century has been achieved through repeatedly widening the circle of concern: not only within our species, but beyond it. Naturally it is tempting to view AI-based machines and simulated beings as next in this succession, but I have tried to argue here that this would be a mistake. Our moral progress has in large part been a recognition of what is shared—consciousness, pain, pleasure, and an interest in the goods of life. Digital computers running programs do not share these features; they merely simulate them.
As such it would be dangerous to approach the coming decades, with their onslaught of AI bots attempting to influence our politics, emotions, and desires, and their promise of ever richer simulations and virtual worlds, with an ethics that conflates appearance and reality.
References
Agrawal, Parag. “Tweet.” Twitter. Twitter, May 16, 2022. https://twitter.com/paraga/status/1526237588746403841.
Bostrom, Nick. “Are You Living in a Computer Simulation?” Philosophical Quarterly 53 (2003): 243-255.
Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press, 2014.
Bostrom, Nick, and Carl Shulman. “Sharing the World with Digital Minds.” Accessed May 27, 2022. https://nickbostrom.com/papers/digital-minds.pdf.
Chalmers, David John. The Conscious Mind: In Search of a Fundamental Theory. Philosophy of Mind Series. New York: Oxford University Press, 1996.
Chalmers, David John. Reality+: Virtual Worlds and the Problems of Philosophy. London, 2022.
Danaher, John. “Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism.” Science and Engineering Ethics 26, no. 4 (2019): 2023-2049.
Frank, L., and S. Nyholm. “Robot Sex and Consent: Is Consent to Sex between a Robot and a Human Conceivable, Possible, and Desirable?” Artificial Intelligence and Law 25, no. 3 (2017): 305-23.
Garun, Natt. “One Year Later, Restaurants Are Still Confused by Google Duplex.” The Verge, May 9, 2019. https://www.theverge.com/2019/5/9/18538194/google-duplex-ai-restaurants-experiences-review-robocalls.
Lemoine, Blake. “Tweet.” Twitter. Twitter, June 11, 2022. https://twitter.com/cajundiscordian/status/1535627498628734976.
Musk, Elon. “Tweet.” Twitter. Twitter, May 17, 2022. https://twitter.com/elonmusk/status/1526465624326782976.
Nagel, Thomas. “What Is It Like to Be a Bat?” The Philosophical Review 83, no. 4 (1974): 435-50.
Searle, John R., D. C. Dennett, and David John Chalmers. The Mystery of Consciousness. New York: New York Review of Books, 1997.
Searle, John R. “Biological Naturalism.” The Oxford Companion to Philosophy, 2005.
Singer, Peter. Animal Liberation. New ed., with an introduction by Yuval Noah Harari. London, 2015.
Sparrow, Robert. “The Turing Triage Test.” Ethics and Information Technology 6, no. 4 (2004): 203-13. doi:10.1007/s10676-004-6491-2.
Tiku, Nitasha. “The Google Engineer Who Thinks the Company’s AI Has Come to Life.” The Washington Post. WP Company, June 17, 2022.
“The Latest Twitter Statistics: Everything You Need to Know.” DataReportal – Global Digital Insights. Accessed May 27, 2022. https://datareportal.com/essential-twitter-stats.
Weizenbaum, Joseph. Computer Power and Human Reason: From Judgment to Calculation. San Francisco, 1976.
It’s becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to humans alone with the acquisition of language. A machine with primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990’s and 2000’s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
The discussion questions whether AI systems could be moral/ethical. I think it might depend upon what venue the AI is intended to operate in and whether that venue has moral scope. Firstly, if the AI were to operate an automated manufacturing system, then moral or ethical considerations might not apply. On the other hand, if the AI were to operate as a nanny, then there would be an expectation of ethical/moral behavior. Previously, the safety of humans has been considered with respect to the use of robots.
As proposed by Isaac Asimov: A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Secondly, if the AI were to interact with humans, then the relationship might be a determining factor for ethical/moral considerations. Humans interact with other lesser-intelligent species already. Beasts of burden, pets, and farm animals are examples. Humans understand that such creatures should not be mistreated.
However, if AI is intended to become an intelligence on an equal footing with humans, then perhaps we should consider the motives of its creators. We would expect the developers’ motives would be instilled in the AI to some extent. The developers would be expected to encode their biases concerning what it is to be an intelligent human. If the developers of AI are evil then the created could be, too. Ethical and/or moral creators => similarly disposed AI.
Thirdly, if we assume the AI is to operate on an equal footing, I would question whether such creations should be developed at all.