
AI

Friend AI: Personal Enhancement or Uninvited Company?

Written by Christopher Register

You can now pre-order a friend, or rather a Friend: a small, round device designed to be an AI friend. It contains AI-powered software and a microphone, and it is meant to be worn on a lanyard around the neck at virtually all times. The austere product website says of Friend that, "When… Read More »

Caution With Chatbots? Generative AI in Healthcare

Written by MSt in Practical Ethics student Dr Jeremy Gauntlett-Gilbert

Human beings, as a species, love to tell stories and to imagine that there are person-like agents behind events. The Ancient Greeks saw the rivers and the winds as personalised deities, placating them if they appeared 'angry'. Psychologists in classic 1940s experiments were impressed at… Read More »

Moral AI And How We Get There with Prof Walter Sinnott-Armstrong

Can we build and use AI ethically? Walter Sinnott-Armstrong discusses how this can be achieved in his new book 'Moral AI and How We Get There', co-authored with Jana Schaich Borg and Vincent Conitzer. Edmond Awad talks through the ethical implications of AI use with Walter in this short video. With thanks to the Atlantic… Read More »

Would You Survive Brain Twinning?

Imagine the following case: A few years in the future, neuroscience has advanced to the point where it can artificially support conscious activity just like that of a human brain. After being diagnosed with an untreatable illness, a patient, C, has transferred (uploaded) his consciousness to the artificial substrate… Read More »

(Bio)technologies, human identity, and the Medical Humanities

Introducing two journal special issues and a conference

Written by Alberto Giubilini

Two special issues of the journals Bioethics and Monash Bioethics Review will be devoted, respectively, to "New (Bio)technology and Human Identity" and "Medical Humanities in the 21st Century" (academic readers, please consider submitting an article). Here I would like to briefly explain why… Read More »

Cross Post: What’s wrong with lying to a chatbot?

Written by Dominic Wilkinson, Consultant Neonatologist and Professor of Ethics, University of Oxford

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Imagine that you are on the waiting list for a non-urgent operation. You were seen in the clinic some months ago, but still don’t have a date for the procedure. It is extremely frustrating, but it seems that you will just have to wait.

However, the hospital surgical team has just got in contact via a chatbot. The chatbot asks some screening questions about whether your symptoms have worsened since you were last seen, and whether they are stopping you from sleeping, working, or doing your everyday activities.

Your symptoms are much the same, but part of you wonders if you should answer yes. After all, perhaps that will get you bumped up the list, or at least able to speak to someone. And anyway, it's not as if this is a real person. Read More »

Political Campaigning, Microtargeting, and the Right to Information

Written by Cristina Voinea

2024 is poised to be a challenging year, partly because of the important elections looming on the horizon – from the United States and various European countries to Russia (though, let us admit, surprises there might be few). As more than half of the global population is on social media, much of political communication and campaigning has moved online. Enter the realm of online political microtargeting, a practice fueled by innovations in data and analytics that has changed the face of political campaigning.

Microtargeting, a form of online targeted advertising, relies on the collection, aggregation, and processing of both online and offline personal data to target individuals with the messages they are most likely to respond to. In political campaigns, microtargeting on social media platforms is used to deliver personalized political ads, attuned to the interests, beliefs, and concerns of potential voters. The objectives of political microtargeting are diverse: it can be used to inform and mobilize, or to confuse, scare, and demobilize. How does political microtargeting change the landscape of political campaigns? I argue that this practice is detrimental to democratic processes because it restricts voters' right to information. (Privacy infringements are an additional reason, but they will not be the focus of this post.)

Read More »

On Grief and Griefbots

Written by Cristina Voinea 

This blog post is a prepublication draft of an article forthcoming in THINK.

Large Language Models are all the hype right now. Among the things we can use them for is the creation of digital personas, known as 'griefbots', that imitate the way people who have passed away spoke and wrote. This can be achieved by inputting a person's data, including their written works, blog posts, social media content, photos, videos, and more, into a Large Language Model such as ChatGPT. Unlike deepfakes, griefbots are dynamic digital entities that continuously learn and adapt. They can process new information, provide responses to questions, offer guidance, and even engage in discussions on current events or personal topics, all while echoing the unique voice and language patterns of the individuals they mimic.

Numerous startups are already anticipating the growing demand for digital personas. Replika is one of the first companies to offer griefbots, although it now focuses on providing more general AI companions, "always there to listen and talk, always on your side". HereAfter AI offers the opportunity to capture one's life story by engaging in dialogue with either a chatbot or a human biographer. This data is then harnessed and compiled with other data points to construct a lifelike replica of oneself that can be offered to loved ones "for the holidays, Mother's Day, Father's Day, birthdays, retirements, and more." And You, Only Virtual is "pioneering advanced digital communications so that we Never Have to Say Goodbye to those we love."

Read More »

Stay Clear of the Door

An AI door, according to a generative AI

Written by David Lyreskog

In what is quite possibly my last entry for the Practical Ethics blog, as I am sadly leaving the Uehiro Centre in July, I would like to reflect on some things that have been stirring my mind over the last year or so.

In particular, I have been thinking about thinking with machines, with people, and what the difference is.

The Uehiro Centre for Practical Ethics is located in an old carpet warehouse on an ordinary side street in Oxford. Facing the building, there is a gym to your left and a pub to your right, mocking the researchers residing within the centre's walls with a daily dilemma.

As you are granted access to the building – be it via buzzer or key card – a dry, somewhat sad voice states "stay clear of the door" before the door slowly swings open.

Read More »