
Artificial Intelligence

(Bio)technology and what makes us human

Andrew Moeller, Faculty of History
Alberto Giubilini, Uehiro Oxford Institute

Themes from the conference “Biotechnology, Artificial Intelligence, and Human Identity” (Medical Humanities Programme). Crossposted from TORCH Medical Humanities.

The conference “Biotechnologies, Artificial Intelligence, and Human Identity” brought together a crowded room to hear 12 speakers engage in lively discussion on whether and how technologies such…

NEW PUBLICATION: AI Morality


Edited by David Edmonds, Distinguished Research Fellow at the Oxford Uehiro Centre, this collection of lively and accessible essays covers topics such as healthcare, employment, autonomous weapons, online advertising and much more. A philosophical task force explores how AI is revolutionizing our lives – and what moral problems it might bring, showing us what to…

(Bio)technologies, human identity, and the Medical Humanities

Introducing two journal special issues and a conference Written by Alberto Giubilini Two special issues of the journals Bioethics and Monash Bioethics Review will be devoted to, respectively, “New (Bio)technology and Human Identity” and “Medical Humanities in the 21st Century” (academic readers, please consider submitting an article). Here I would like to briefly explain why…

AI Authorship: Responsibility is Not Required

This is the fifth in a series of blogposts by the members of the Expanding Autonomy project, funded by the Arts and Humanities Research Council.

by Neil Levy

AI is rapidly being adopted across all segments of academia (as it is across much of society). The landscape is changing fast, and we haven’t yet settled on the norms that should govern how it’s used. Given how extensive that use already is, and how deeply integrated it is into every aspect of paper production, one important question is whether an AI can play the authorship role. Should AIs be credited in the same way that humans might be?

On Grief and Griefbots

Written by Cristina Voinea 

 This blogpost is a prepublication draft of an article forthcoming in THINK 

 

Large Language Models are all the hype right now. Among the things we can use them for is the creation of digital personas, known as ‘griefbots’, which imitate the way people who have passed away spoke and wrote. This can be achieved by inputting a person’s data, including their written works, blog posts, social media content, photos, videos, and more, into a Large Language Model such as ChatGPT. Unlike deepfakes, griefbots are dynamic digital entities that continuously learn and adapt. They can process new information, provide responses to questions, offer guidance, and even engage in discussions on current events or personal topics, all while echoing the unique voice and language patterns of the individuals they mimic.
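The mechanism described above – conditioning a language model on a person’s own writings so that its answers echo their voice – can be sketched in miniature. The following is a hypothetical illustration only: the persona name, corpus, and prompt format are invented for the example, and real griefbot services would add fine-tuning, retrieval, and safety layers on top of anything this simple.

```python
def build_griefbot_prompt(name: str, writings: list[str], question: str) -> str:
    """Assemble a prompt asking a language model to answer a question
    in the voice of `name`, grounded in their own writing samples."""
    samples = "\n---\n".join(writings)
    return (
        f"You are a digital persona of {name}. Imitate their tone and "
        f"vocabulary, based on these writing samples:\n{samples}\n\n"
        f"Question: {question}\nAnswer as {name}:"
    )

# Invented example corpus for a fictional person, "Ada".
corpus = [
    "Dear all, the garden is blooming again this spring...",
    "I have always believed kindness costs nothing.",
]
prompt = build_griefbot_prompt("Ada", corpus, "What matters most in life?")
print(prompt)
```

The prompt would then be sent to whatever model the service uses; the continual “learning and adapting” the text mentions would come from appending new conversation data to the corpus over time.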

Numerous startups are already anticipating the growing demand for digital personas. Replika is one of the first companies to offer griefbots, although it now focuses on providing more general AI companions, “always there to listen and talk, always on your side”. HereAfter AI offers the opportunity to capture one’s life story by engaging in dialogue with either a chatbot or a human biographer. This data is then harnessed and compiled with other data points to construct a lifelike replica of oneself that can be offered to loved ones “for the holidays, Mother’s Day, Father’s Day, birthdays, retirements, and more.” And You, Only Virtual is “pioneering advanced digital communications so that we Never Have to Say Goodbye to those we love.”


Playing the Game of Faces with AI

Written by Edmond Awad

 

In the popular series “Game of Thrones” (and the corresponding “A Song of Ice and Fire” novels), the “Game of Faces” is a training method used by the Faceless Men, an enigmatic guild of assassins. This method teaches trainees to convincingly adopt the face of others for their covert missions.

The Game of Faces can be seen as a metaphor for the way we interact with others in the real world, as well as the way we present ourselves online. In the Game of Thrones TV series, the Faceless Men are able to change their appearance at will, which allows them to deceive others and get close to their targets. This ability can be seen as a symbol of the power of deception and manipulation.

Finding Meaning in the Age of Neurocentrism – and in a Transhuman Future


Written by Mette Leonard Høeg

 

Through the ordinary state of being, we’re already creators in the most profound way, creating our experience of reality and composing the world we perceive.

Rick Rubin, The Creative Act

 

Phenomenal consciousness is still a highly mysterious phenomenon: it is accessible mainly subjectively, and there is nothing close to scientific consensus on the explanation of its sources. The neuroscientific understanding of the human mind is deepening, however, and the possibilities for technologically and biomedically altering brain and mind states, and for engineering awareness in technological systems, are developing rapidly.

Stay Clear of the Door

An AI door, according to a generative AI

Written by David Lyreskog 

 

In what is quite possibly my last entry for the Practical Ethics blog, as I’m sadly leaving the Uehiro Centre in July, I would like to reflect on some things that have been stirring my mind the last year or so.

In particular, I have been thinking about thinking with machines, with people, and what the difference is.

The Uehiro Centre for Practical Ethics is located in an old carpet warehouse on an ordinary side street in Oxford. Facing the building, there is a gym to your left, and a pub to your right, mocking the researchers residing within the centre walls with a daily dilemma. 

As you are granted access to the building – be it via buzzer or key card – a dry, somewhat sad, voice states “stay clear of the door” before the door slowly swings open.


It is not about AI, it is about humans

Written by Alberto Giubilini

We might be forgiven for asking so frequently these days whether we should trust artificial intelligence. Too much has been written about the promises and perils of ChatGPT to escape the question. Upon reading both enthusiastic and concerned accounts of it, there seems to be very little the software cannot do. It can provide or fabricate a huge amount of information in the blink of an eye, reinterpret it and organize it into essays that seem written by humans, produce different forms of art (from figurative art to music, poetry, and so on) virtually indistinguishable from human-made art, and so much more.

It seems fair to ask how we can trust AI not to fabricate evidence, plagiarize, defame, serve anti-democratic political ends, violate privacy, and so on.

One possible answer is that we cannot. This could be true in two senses.

In a first sense, we cannot trust AI because it is not reliable. It gets things wrong too often, there is no way to tell when it is wrong without doing for ourselves the kind of research the software was supposed to do for us, and it can be used in unethical ways. On this view, the right attitude towards AI is one of cautious distrust. What it does might well be impressive, but it is not reliable, epistemically or ethically.

In a second sense, we cannot trust AI for the same reason why we cannot distrust it, either. Quite simply, trust (and distrust) is not the kind of attitude we can have towards tools. Unlike humans, tools are just means to our ends. They can be more or less reliable, but not more or less trustworthy. In order to trust, we need to have certain dispositions – or ‘reactive attitudes’, to use some philosophical jargon – that can only be appropriately directed at humans. According to Richard Holton’s account of ‘trust’, for instance, trust requires the readiness to feel betrayed by the individual you trust[1]. Or perhaps we can talk, less emphatically, of readiness to feel let down.


ChatGPT Has a Sexual Harassment Problem

Written by César Palacios-González

@CPalaciosG

If I were to post online that you have been accused of sexually harassing someone, you could rightly maintain that this is libellous. This is a false statement that damages your reputation. You could demand that I correct it and that I do so as soon as possible. The legal system could punish me for what I have done, and, depending on where I was in the world, it could send me to prison, fine me, and ask me to delete and retract my statements. Falsely accusing someone of sexual harassment is considered to be very serious.

In addition to the legal aspect there is also an ethical one. I have done something morally wrong, and more specifically, I have harmed you. We know this because, everything else being equal, if I had not falsely claimed that you had been accused of sexual harassment, you would be better off. This way of putting it might sound odd, but it is not really so if we compare it to, for example, bodily harms. If I wantonly break your arm, I harm you, and I do so because if I hadn’t done so you would be better off.