
AI Ethics


National Oxford Uehiro Prize in Practical Ethics: Undisclosed Conversational AIs: A Threat to Users’ Autonomy


This article received an honourable mention in the graduate category of the 2024 National Oxford Uehiro Prize in Practical Ethics. Written by Beatrice Marchegiani.

Introduction: Recent advancements in Large Language Models have enabled AI systems to engage in conversations with users that are virtually indistinguishable from human interactions. The proliferation of advanced Conversational AIs (CAIs)…

On Grief and Griefbots

Written by Cristina Voinea 

 This blogpost is a prepublication draft of an article forthcoming in THINK 

 

Large Language Models are all the hype right now. Amongst the things we can use them for is the creation of digital personas, known as ‘griefbots’, that imitate the way people who have passed away spoke and wrote. This can be achieved by inputting a person’s data, including their written works, blog posts, social media content, photos, videos, and more, into a Large Language Model such as ChatGPT. Unlike deepfakes, griefbots are dynamic digital entities that continuously learn and adapt. They can process new information, provide responses to questions, offer guidance, and even engage in discussions on current events or personal topics, all while echoing the unique voice and language patterns of the individuals they mimic.

Numerous startups are already anticipating the growing demand for digital personas. Replika is one of the first companies to offer griefbots, although now they focus on providing more general AI companions, “always there to listen and talk, always on your side”. HereAfter AI offers the opportunity to capture one’s life story by engaging in dialogue with either a chatbot or a human biographer. This data is then harnessed and compiled with other data points to construct a lifelike replica of oneself that can then be offered to loved ones “for the holidays, Mother’s Day, Father’s Day, birthdays, retirements, and more.” Also, You, Only Virtual, is “pioneering advanced digital communications so that we Never Have to Say Goodbye to those we love.”   


Stay Clear of the Door


Written by David Lyreskog 

 

In what is quite possibly my last entry for the Practical Ethics blog, as I’m sadly leaving the Uehiro Centre in July, I would like to reflect on some things that have been stirring my mind the last year or so.

In particular, I have been thinking about thinking with machines, with people, and what the difference is.

The Uehiro Centre for Practical Ethics is located in an old carpet warehouse on an ordinary side street in Oxford. Facing the building, there is a gym to your left, and a pub to your right, mocking the researchers residing within the centre walls with a daily dilemma. 

As you are granted access to the building – be it via buzzer or key card – a dry, somewhat sad, voice states “stay clear of the door” before the door slowly swings open.


ChatGPT Has a Sexual Harassment Problem

written by César Palacios-González

@CPalaciosG

If I were to post online that you have been accused of sexually harassing someone, you could rightly maintain that this is libellous. This is a false statement that damages your reputation. You could demand that I correct it and that I do so as soon as possible. The legal system could punish me for what I have done, and, depending on where I was in the world, it could send me to prison, fine me, and ask me to delete and retract my statements. Falsely accusing someone of sexual harassment is considered to be very serious.

In addition to the legal aspect there is also an ethical one. I have done something morally wrong and, more specifically, I have harmed you. We know this because, everything else being equal, if I had not falsely claimed that you had been accused of sexual harassment, you would be better off. This way of putting it might sound odd, but it is not really so if we compare it to, for example, bodily harms. If I wantonly break your arm, I harm you, and I do so because if I hadn’t done so you would be better off.

Should Social Media Companies Use Artificial Intelligence to Automate Content Moderation on their Platforms and, if so, Under What Conditions?


This article received an honourable mention in the graduate category of the 2023 National Oxford Uehiro Prize in Practical Ethics

Written by University of Oxford student Trenton Andrew Sewell 

Social Media Companies (SMCs) should use artificial intelligence (‘AI’) to automate content moderation (‘CM’) presuming they meet two kinds of conditions. Firstly, ‘End Conditions’ (‘ECs’) which restrict what content is moderated. Secondly, ‘Means Conditions’ (‘MCs’) which restrict how moderation occurs.

This essay focuses on MCs. Assuming some form of moderation is permissible, I will discuss how and whether SMCs should use AI to moderate. To this end, I outline how CM AI should respect users’ ‘moral agency’ (‘MA’) through transparency, clarity, and the provision of an option to appeal the AI’s judgment. I then address whether AI’s failure to respect MA proscribes its use. It does not. SMCs are permitted[1] to use AI, despite procedural failures, to discharge substantive obligations to users and owners.

Guest Post: Dear Robots, We Are Sorry


Written by Stephen Milford, PhD

Institute for Biomedical Ethics, Basel University

 

The rise of AI presents humanity with an interesting prospect: a companion species. Ever since our last hominid cousins went extinct on the island of Flores almost 12,000 years ago, Homo sapiens has been alone in the world.[i] AI, true AI, offers us the unique opportunity to regain what was lost to us. Ultimately, this is what has captured our imagination and drives our research forward. Make no mistake, our intentions with AI are clear: artificial general intelligence (AGI). A being that is like us, a personal being (whatever ‘person’ may mean).

If any of us are in any doubt about this, consider Turing’s famous test. The aim is not to see how intelligent the AI can be, how many calculations it performs, or how it sifts through data. An AI will pass the test if it is judged by a person to be indistinguishable from another person. Whether this is artificial or real is academic; the result is the same: human persons will experience the reality of another person for the first time in 12,000 years, and we are closer now than ever before.

The ABC of Responsible AI

Written by Maximilian Kiener

 

Amazon’s Alexa recently told a ten-year-old girl to touch a live plug with a penny, encouraging the girl to do what could potentially lead to severe burns or even the loss of an entire limb.[1] Fortunately, the girl’s mother heard Alexa’s suggestion, intervened, and made sure her daughter stayed safe.

But what if the girl had been hurt? Who would have been responsible: Amazon for creating Alexa, the parents for not watching their daughter, or the licensing authorities for allowing Alexa to enter the market?


Three Observations about Justifying AI

Written by:  Anantharaman Muralidharan, G Owen Schaefer, Julian Savulescu
Cross-posted with the Journal of Medical Ethics blog

Consider the following kind of medical AI. It consists of two parts. The first part is a core deep machine learning algorithm. These blackbox algorithms may be more accurate than human judgment or interpretable algorithms, but are notoriously opaque in terms of telling us on what basis a decision was made. The second part is an algorithm that generates a post-hoc medical justification for the core algorithm. Algorithms like this are already available for visual classification. When the primary algorithm identifies a given bird as a Western Grebe, the secondary algorithm provides a justification for this decision: “because the bird has a long white neck, pointy yellow beak and red eyes”. The justification goes beyond just a description of the provided image or a definition of the bird in question, and is able to link the information provided in the image to the features that distinguish the bird. The justification is also sufficiently fine-grained to account for why the bird in the picture is not a similar bird like the Laysan Albatross. It is not hard to imagine that such an algorithm will soon be available for medical decisions, if it is not already. Let us call this type of AI “justifying AI”, to distinguish it from algorithms which try, to some degree or other, to wear their inner workings on their sleeves.

Possibly, it might turn out that the medical justification given by the justifying AI sounds like pure nonsense. Rich Caruana et al. present a case in which asthmatics were deemed less at risk of dying from pneumonia. As a result, the algorithm prescribed less aggressive treatments for asthmatics who contracted pneumonia. The key mistake the primary algorithm made was that it failed to account for the fact that asthmatics who contracted pneumonia had better outcomes only because they tended to receive more aggressive treatment in the first place. Even though the algorithm was more accurate on average, it was systematically mistaken about one subgroup. When incidents like these occur, one option is to disregard the primary AI’s recommendation. The rationale is that, by intervening in cases where the blackbox gives an implausible recommendation or prediction, we could hope to do better than by relying on the blackbox alone. The aim of having justifying AI is to make it easier to identify when the primary AI is misfiring. After all, we can expect trained physicians to recognise a good medical justification when they see one, and likewise to recognise bad justifications. The thought here is that the secondary algorithm generating a bad justification is good evidence that the primary AI has misfired.

The worry here is that our existing medical knowledge is notoriously incomplete in places. It is to be expected that there will be cases where the optimal decision vis-à-vis patient welfare lacks a plausible medical justification, at least on our current medical knowledge. For instance, lithium is used as a mood stabilizer, but why it works is poorly understood. This means that ignoring the blackbox whenever a plausible justification in terms of our current medical knowledge is unavailable will tend to lead to less optimal decisions. Below are three observations that we might make about this type of justifying AI.
