
Peter Railton’s Uehiro Lectures 2022

Written by Maximilian Kiener

Professor Peter Railton, from the University of Michigan, delivered the 2022 Uehiro Lectures in Practical Ethics. In a series of three consecutive presentations entitled ‘Ethics and Artificial Intelligence’, Railton focused on what has become one of the major areas in contemporary philosophy: the challenge of how to understand, interact with, and regulate AI.

Railton’s primary concern is not the ‘superintelligence’ that could vastly outperform humans and, as some have suggested, threaten human existence as a whole. Rather, Railton focuses on what we are already confronted with today, namely partially intelligent systems that increasingly execute a variety of tasks, from powering autonomous cars to assisting medical diagnostics and algorithmic decision-making, and more.

The potential benefits of such applications are vast, Railton explains, but AI also comes with significant risks of harm, discrimination, and polarisation. To minimise those risks, Railton urges, we need to ensure that those partially intelligent systems become appropriately sensitive to morally relevant features of situations, actions, and outcomes – both in their relations with humans and in their relations among themselves.

But how can we create the conditions for AI to learn how to take morally relevant features into account? Railton suggests taking a step back and looking at ‘moral learning’ more generally. Railton draws on neuroscience and psychology to explain how humans as well as non-human animals acquire moral capabilities throughout their cognitive and emotional development. Moreover, Railton argues that the ‘question then becomes whether artificial systems might be capable of similar cognitive and social development.’

From here, two central ideas emerge. First, there is an important connection between artificial intelligence and biological intelligence (i.e. human and animal) such that we may learn more about the one by looking at the other. In particular, Railton suggests that our neuroscientific and psychological knowledge about human intelligence can help us understand and model artificial intelligence. Second, given the parallels between artificial and human intelligence, we can extend models of human co-operation to include artificial agents too. More specifically, Railton suggests that, even though AI agents might not be full moral agents in the foreseeable future, we still can and should devise a new social contract to regulate the human-AI community.

In the rest of this blog, I shall restrict my attention to these two central ideas, reserving for later discussion many of the other imaginative and astute observations that Railton made.

 

1. Biological and Artificial Intelligence

A central thread in Railton’s lectures is the view that we can learn about artificial intelligence from our increasing understanding of biological intelligence and developmental psychology, and conversely increase our understanding of the latter through our greater understanding of AI. In connecting neuroscience and AI, Railton appeals to us to use what we already know about brains to explore what we currently fail to understand about AI.

Railton’s suggestion is promising for several reasons. To begin with, significant parts of AI research are inspired by the human brain, especially AI research on so-called deep neural networks where artificial neurons, similar in some respects to biological neurons, play a crucial role. Thus, if we understand how human brains work, for instance how they acquire moral capabilities, how they process beliefs, and how motivation functions, then we may transfer these insights to artificial intelligence too. Moreover, the reverse direction may hold as well: AI can foster our understanding of creativity and imagination. For instance, consider Lee Sedol, the many-times world champion of the game of Go, who lost against the AI-powered AlphaGo and later said that playing against the AI changed his views about creativity, precisely because the AI employed strategies that were hitherto unknown to humans. In an interview, Lee Sedol said that what he used to see as creative now appears to him as mere convention.
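To make the neural-network analogy mentioned above a little more concrete, here is a minimal sketch, my own illustration rather than anything from the lectures, of the kind of artificial neuron from which deep networks are built: a weighted sum of inputs passed through a non-linear activation, loosely echoing a biological neuron integrating signals and firing past a threshold.

```python
import numpy as np

def artificial_neuron(inputs, weights, bias):
    # Weighted sum of incoming signals plus a bias, loosely analogous to a
    # biological neuron integrating its synaptic inputs.
    pre_activation = np.dot(weights, inputs) + bias
    # A non-linear 'activation' (here a sigmoid) squashes the result into
    # (0, 1), playing the role of the neuron firing more or less strongly.
    return 1.0 / (1.0 + np.exp(-pre_activation))

# Purely illustrative numbers: three input signals and hand-picked weights.
print(artificial_neuron(np.array([0.2, 0.7, 0.1]),
                        np.array([0.9, -0.4, 0.3]),
                        bias=0.05))
```

Deep networks stack millions of such units and adjust the weights during learning, which is where the loose structural parallel with biological brains comes from.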

Moreover, drawing on our knowledge of human brains and the development of social and moral capacities may also be helpful because AI, or so Railton argues, will need precisely those capacities that humans acquire in their cognitive development, such as the ability to guess other agents’ intentions, in order to perform its tasks properly. One of Railton’s key examples concerns the co-ordination and co-operation problems that autonomous vehicles face.
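For a concrete, if toy, picture of such a problem, consider two autonomous cars arriving at a narrow merge. The sketch below is my own illustration with hypothetical payoffs, not an example from the lectures; it simply finds the outcomes from which neither car would unilaterally deviate.

```python
import itertools

# Hypothetical payoffs for two autonomous cars at a narrow merge.
# Each chooses 'go' or 'yield'; both going risks a collision, both yielding
# wastes time, and the two asymmetric outcomes are the efficient ones.
payoffs = {
    ('go', 'go'):       (-10, -10),
    ('go', 'yield'):    (2, 1),
    ('yield', 'go'):    (1, 2),
    ('yield', 'yield'): (0, 0),
}

def is_stable(a, b):
    # Neither car can do better by unilaterally switching its own action.
    pa, pb = payoffs[(a, b)]
    best_a = all(payoffs[(alt, b)][0] <= pa for alt in ('go', 'yield'))
    best_b = all(payoffs[(a, alt)][1] <= pb for alt in ('go', 'yield'))
    return best_a and best_b

for a, b in itertools.product(('go', 'yield'), repeat=2):
    if is_stable(a, b):
        print(f"Stable outcome: car A chooses '{a}', car B chooses '{b}'")
```

The point of the toy model is that there are two equally stable outcomes, so the cars still need some way of settling who yields, for instance by reading each other’s intentions, which is precisely the kind of capacity Railton has in mind.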

On the other hand, the connection between biological and artificial intelligence has limits too. In some ways, AI systems are very different from humans in how they perform certain tasks, so it becomes less plausible to think that human brains could be our sole guide to understanding AI. As Yavar Bathaee explained in the Harvard Journal of Law and Technology, there are so-called support vector machines that are ‘capable of finding geometric patterns in higher-dimensional space’ which humans can no longer visualise. In these and other cases, artificial reasoning becomes increasingly different from, and even alien to, human reasoning. Thus, any approach that aims to understand machine intelligence in terms of human intelligence will be limited.
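As a rough illustration of Bathaee’s point, here is a minimal sketch, my own and using synthetic data plus the scikit-learn library, of a support vector machine whose non-linear kernel effectively separates the data in an implicit, very high-dimensional feature space that no human could visualise directly.

```python
# Requires scikit-learn; the data are synthetic and purely illustrative.
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two classes arranged in concentric circles: not separable by any straight
# line in the two dimensions we can actually see.
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

# An RBF kernel corresponds to an (in effect) infinite-dimensional feature
# space, where the classes become separable by a simple geometric boundary.
clf = SVC(kernel='rbf').fit(X, y)
print("Training accuracy:", clf.score(X, y))
```

The classifier’s ‘reasoning’ happens in that implicit space, which is one way of seeing why human visual intuition runs out as a guide.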

Another indication of the divergence of human from machine intelligence is the fact that AI can be influenced in ways that would have no impact on humans at all. Image recognition provides a good example. Suppose AI correctly classifies a photo from the 2022 Uehiro Lectures as showing Peter Railton. But now let’s execute a so-called ‘input’ attack on the AI, i.e. let’s just slightly change some of the pixel values of the image by adding some digital dust in the right places. To the human eye, this change will be undetectable. Yet, if done expertly, the AI could be fooled into thinking that the image now shows Oxford’s Radcliffe Camera, Mount Everest, or just a tree. Thus, examples like these indicate too that the way in which AI systems ‘reason’ or ‘perceive’ the world is very different from how humans do it.
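For readers curious about the mechanics, the following is a schematic sketch of such an ‘input’ attack in the style of the standard fast-gradient-sign recipe. It is my own illustration, assuming a PyTorch-style classifier; `model`, `image`, and `true_label` are placeholders rather than real objects.

```python
import torch
import torch.nn.functional as F

def input_attack(model, image, true_label, epsilon=0.01):
    # `model`, `image`, and `true_label` stand in for a real classifier,
    # an input image tensor, and its correct label.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)  # how wrong is the model?
    loss.backward()                                   # gradient of the error w.r.t. each pixel
    # Nudge every pixel slightly in the direction that increases the error:
    # this is the 'digital dust', invisible to the human eye for small epsilon.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()             # keep pixel values in a valid range
```

The perturbation is bounded by `epsilon`, which is what keeps it undetectable to us while still steering the classifier towards a wildly different label.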

Overall, therefore, I think we should consider Railton’s approach of connecting machine intelligence to human and animal intelligence both a vital piece in understanding AI and an invitation to other approaches to join forces and build a comprehensive account that none of them could provide on its own.

 

2. AI and Social Contract

The second central thread in Railton’s lectures concerns the interaction between humans and AI, or between AI systems themselves.

Railton outlines how AI systems have achieved success not only in the performance of isolated tasks, such as image identification and generation, or language recognition and translation, but also in open-ended strategic and co-operative game playing with other agents. In such contexts, Railton suggests that we should think of AI as our ‘ally’. An ally is not a person, but is more than just a tool. It is an agent capable of the distinct type of rationality that is needed for co-operation.

But if so, Railton suggests, the philosophical idea of a social contract, based on rational co-operation, may provide the framework for our interaction with AI. This suggestion is ingenious for various reasons.

First, many of our interactions with AI, and between AI systems themselves, could indeed be described as some form of rationality-based co-operation. Just as humans rely on their beliefs and goals in co-operation, AI systems possess equivalents to such beliefs and goals too, for instance with their Bayesian reasoning capacities and value functions. Thus, humans and AI may be capable of converging on the type of rationality inherent in co-operation.
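As a toy illustration of these machine analogues of beliefs and goals, consider the sketch below; it is my own, with purely illustrative numbers, and is not meant to describe any particular system. An artificial agent maintains a Bayesian belief about whether its partner will co-operate and consults a simple value function to choose its own action.

```python
def update_belief(prior, p_obs_if_cooperative, p_obs_if_not):
    # Bayes' rule: revise the probability that the partner is the
    # co-operative type after observing one co-operative move.
    numerator = prior * p_obs_if_cooperative
    return numerator / (numerator + (1 - prior) * p_obs_if_not)

def choose_action(belief_partner_cooperates):
    # A toy value function: the expected payoff of each action under the
    # current belief (the payoff numbers are purely illustrative).
    b = belief_partner_cooperates
    values = {
        'cooperate': b * 3 + (1 - b) * (-2),  # pays off if reciprocated
        'defect':    b * 1 + (1 - b) * 0,     # safe but low-value
    }
    return max(values, key=values.get)

belief = update_belief(prior=0.5, p_obs_if_cooperative=0.9, p_obs_if_not=0.2)
print(round(belief, 2), choose_action(belief))  # ~0.82, 'cooperate'
```

Nothing in this little model requires sentience or well-being, which is exactly why, as discussed below, the analogy with human interests remains contested.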

Moreover, the father of modern Social Contract Theory, Thomas Hobbes, may well support this extension to AI, given that he provided a conception of rationality that is by no means restricted to humans. Hobbes said that

‘Reason (…) is nothing but Reckoning (that is, Adding and Substracting) of the Consequences of generall names agreed upon’ [sic] (Leviathan, First Part, Chapter V: Of Reason and Science).

Both humans and AI are capable of such reckoning. So, if this type of rationality underlies the social contract, there is no principled reason why we could not also include AI as a party to the social contract. In fact, Hobbes himself already allowed for some artificial agents in his own theory. Hobbes’ Leviathan, the absolute sovereign in Hobbes’ state, is arguably an artificial and machine-like agent too.

Thus, Railton’s suggestion promises another significant advancement. If Railton is right, authors like Hobbes and other social contract theorists may hold invaluable insights for our understanding of human-AI interaction. It is here that Railton’s lectures once again show his outstanding ability to link current challenges presented by AI to traditional philosophy and its history.

Yet, this approach may have some limitations too. Traditionally, the rational parties in social contract theory have interests and they look for mutually beneficial co-operation with others whom they can trust and hold accountable. However, none of these aspects applies to AI systems in the same way as they apply to humans.

As several audience members pointed out, including Jeff McMahan, an interest (in the philosophically relevant understanding) is something the satisfaction of which increases well-being. Yet, AI systems lack well-being, and so their ‘interests’ are not like human interests. In response, Railton conceded the difference between human and artificial interests. Yet, Railton insisted that AI systems possess genuine interests insofar as certain events or actions in the world advance these systems’ objectives, as defined by their value or reward functions.

A similar point can be raised about AI systems’ capacity to benefit. Here too, the absence of sentience and well-being may cast doubt on whether AI is genuinely able to benefit from anything at all, or whether the word ‘benefit’, like ‘interests’, is merely a metaphorical extension.

In addition, in what sense can we trust, rather than merely rely on, AI systems? One key difference between trust and reliance is that trust can be betrayed, whereas reliance can only be frustrated. If I trust my friend to help me move house, his not showing up betrays my trust. But if I rely on my computer for writing this blog, and the computer breaks down, I am not betrayed. My reliance is merely frustrated. The reason behind this difference is that we normally restrict trust to full moral agents, which Railton repeatedly emphasises AI systems are not. Thus, Railton operates with a more limited notion of trust that differs from its use in human interaction and social contract theory.

Finally, the idea of a contract is also essentially connected to its parties’ mutual accountability. However, and Railton agrees, AI systems are not (yet) accountable to humans or to each other in any morally relevant sense. The relation between humans and AI is, therefore, very different from the relation between humans themselves.

For these reasons, my conclusion about Railton’s second central idea is similar to my conclusion about his first idea. Railton again presents an ingenious approach to understanding and dealing with AI. Utilising social contract theory may indeed be an important part of the puzzle. Yet, at the same time, the central notions in social contract theory, viz. interests, benefits, trust, and accountability, receive a different meaning in Railton’s view, so that the significance of social contract theory, as we have understood it so far, remains limited. I therefore suggest that Railton’s approach should be seen as an invitation for further research in this regard too. Railton himself mentioned that he had a full ‘spiel’ on the notions of artificial interests and benefits, and so we may look forward to hearing more about these notions and to continuing to learn from Railton’s insights.

 

Concluding Thoughts

It is most fortunate for the philosophy and the ethics of AI that eminent philosophers like Peter Railton are working on some of the most pressing challenges this domain raises. In his lectures, Railton skilfully applied his expertise in normative ethics, political philosophy, and the philosophy of science to advance our understanding of AI and how we should regulate and interact with it. Railton’s two key ideas, i.e. (1) the connection between machine intelligence and biological intelligence and (2) social contract theory updated for AI, enrich the debate in significant ways and carve out new lines of inquiry. It is indeed the hallmark of a great philosophical work that it allows us to see things in a different light and to connect inquiries that have hitherto led separate lives. Railton’s lectures have made an important contribution, and we may look forward to seeing their continuing impact on how we think about AI in the future.

Details of the lectures, including audio and video recordings plus transcript, can be found here.


2 Comments on this post

  1. Paul D. Van Pelt

    Sounds like a systematic thinker, not too caught up yet in moralisms. I still am overwhelmed by long-term implications of AI. But, insofar as it is on everyone’s mind and more so with passing time, it is unlikely to go away. What kind can imagine, man can do. Probably.
    Inasmuch as reality has changed, abundantly, in the last half-century. I read a lot of ideas. Some of those have been re-cycled from earlier notions. AI exists in its own reality, not as far as I know, recycled—unless one counts science fiction. Yes, level-headed, this thinker. And, systematic. Impressive.

  2. Shared some of this with an associate, who in turn sent me a talk Railton gave four years ago. The substance seemed consistent with what was described herein. My initial impressions appear fairly accurate. Functional morality seems a good theoretical characterization of what machine learning could become. For my limited understanding it makes some sense. AOK, here. Told my associate as much. Now, then, my question is, as usual, is/will/would this be more or less useful? And, if so, why/when/ and for how long? I am skeptical of circumstances and contingencies—there are so many of those.
