
Video Interview: Prof Peter Railton, AI and moral obligations

In this Thinking Out Loud interview with Katrien Devolder, Philosophy Professor Peter Railton presents his take on how to understand, and interact with, AI. He discusses how AI agents can have moral obligations towards us humans and towards each other, and why we humans have moral obligations towards AI agents. He also stresses that the best way to tackle certain world problems, including the dangers of AI itself, is to form a strong community consisting of biological AND AI agents.

 


2 Comments on this post

  1. I am not there with the moral obligations commitment. I have not been successful in connecting the dots leading from creators to creations. It seems to me that even when/if the notion of God as creator is deleted, the equation between thinking flesh and blood and thinking metal, plastic, and wires sits right where it always has. Just because an inference is drawn regarding ‘how things are’ and ‘how they might be’ does not dictate what they must be in order for there to be moral certainty. When we look around and among ourselves, we soon determine that ethics and morality are tentative. As attorneys might say, tongue-in-cheek: they depend. They depend on shared values, shared obligations, and many other factors which must be agreed upon in order for there to be understanding and agreement, in that specific order. Although there must be understanding, on some level, of how and why AI can be made to work, we have no compelling reasons to treat the creation ethically or morally. Put differently, equitable and fair treatment of artificial intelligence has no so-called moral compass. So, argue as you may. That tree has no dog barking at its base. It does not need one.

  2. In the interview Peter Railton comments that social groups (organisations) do not have feelings. I would dispute that, as many conflicts have been fought over the ‘feelings’ of hurt or other such things felt by social groups (organisations). That becomes more visible where a large portion of the members are hurt because of the hurt to their organisation, and as a result the organisation has to react in some way against the hurt/harm; or, if the hurt/harm is publicly visible, some action becomes necessary to protect itself from further hurt/harm. For examples of feelings associated with social groups, look to larger social groups, say nation states, where any leadership is in a position in which it has to react to the feelings emanating from the membership after some perceived slight to that cultural base.
    Further, the overall message coming from the interview is one of control over the circumstances of AI, and a lasting impression was one of ‘maintenance’, with more constrained coverage of aspiration or forward movement for either mankind or AI, yet that is inherent in the purpose of developing AI. That is understandable from a homocentric perspective with a long association with raising children in that same way. But will AI be a child? And if so, for how long? Otherwise, many of the arguments about individuals and social groups and the formation of values and moral themes resonated well.
