
Artificial Intelligence

AI Authorship: Responsibility is Not Required

This is the fifth in a series of blogposts by the members of the Expanding Autonomy project, funded by the Arts and Humanities Research Council.

by Neil Levy

AI is rapidly being adopted across all segments of academia (as it is across much of society). The landscape is rapidly changing, and we haven’t yet settled on the norms that should govern how it’s used. Given how extensive usage already is, and how deeply integrated it has become into every aspect of paper production, one important question concerns whether an AI can play the authorship role. Should AIs be credited in the same way as humans might be?

On Grief and Griefbots

Written by Cristina Voinea 

 This blogpost is a prepublication draft of an article forthcoming in THINK 

 

Large Language Models are all the hype right now. Among the things we can use them for is the creation of digital personas, known as ‘griefbots’, that imitate the way people who have passed away spoke and wrote. This can be achieved by inputting a person’s data, including their written works, blog posts, social media content, photos, videos, and more, into a Large Language Model such as ChatGPT. Unlike deepfakes, griefbots are dynamic digital entities that continuously learn and adapt. They can process new information, provide responses to questions, offer guidance, and even engage in discussions on current events or personal topics, all while echoing the unique voice and language patterns of the individuals they mimic.

Numerous startups are already anticipating the growing demand for digital personas. Replika is one of the first companies to offer griefbots, although it now focuses on providing more general AI companions, “always there to listen and talk, always on your side”. HereAfter AI offers the opportunity to capture one’s life story by engaging in dialogue with either a chatbot or a human biographer. This data is then harnessed and compiled with other data points to construct a lifelike replica of oneself that can then be offered to loved ones “for the holidays, Mother’s Day, Father’s Day, birthdays, retirements, and more.” And You, Only Virtual is “pioneering advanced digital communications so that we Never Have to Say Goodbye to those we love.”


Playing the Game of Faces with AI

Written by Edmond Awad

 

In the popular series “Game of Thrones” (and the corresponding “A Song of Ice and Fire” novels), the “Game of Faces” is a training method used by the Faceless Men, an enigmatic guild of assassins. This method teaches trainees to convincingly adopt the face of others for their covert missions.

The Game of Faces can be seen as a metaphor for the way we interact with others in the real world, as well as the way we present ourselves online. In the Game of Thrones TV series, the Faceless Men are able to change their appearance at will, which allows them to deceive others and get close to their targets. This ability can be seen as a symbol of the power of deception and manipulation.

Finding Meaning in the Age of Neurocentrism – and in a Transhuman Future

 

 

Written by Mette Leonard Høeg

 

Through the ordinary state of being, we’re already creators in the most profound way, creating our experience of reality and composing the world we perceive.

Rick Rubin, The Creative Act

 

Phenomenal consciousness is still a highly mysterious phenomenon – it is mainly subjectively accessible, and there is nothing close to scientific consensus on the explanation of its sources. The neuroscientific understanding of the human mind is, however, deepening, and the possibilities for technologically and biomedically altering brain and mind states, and for engineering awareness in technological systems, are developing rapidly.

Stay Clear of the Door

An AI door, according to a generative AI

Written by David Lyreskog 

 

In what is quite possibly my last entry for the Practical Ethics blog, as I’m sadly leaving the Uehiro Centre in July, I would like to reflect on some things that have been stirring my mind the last year or so.

In particular, I have been thinking about thinking with machines, with people, and what the difference is.

The Uehiro Centre for Practical Ethics is located in an old carpet warehouse on an ordinary side street in Oxford. Facing the building, there is a gym to your left, and a pub to your right, mocking the researchers residing within the centre walls with a daily dilemma. 

As you are granted access to the building – be it via buzzer or key card – a dry, somewhat sad, voice states “stay clear of the door” before the door slowly swings open.


It is not about AI, it is about humans

Written by Alberto Giubilini

We might be forgiven for asking so frequently these days whether we should trust artificial intelligence. Too much has been written about the promises and perils of ChatGPT to escape the question. Upon reading both enthusiastic and concerned accounts of it, there seems to be very little the software cannot do. It can provide or fabricate a huge amount of information in the blink of an eye, reinterpret it and organize it into essays that seem written by humans, produce different forms of art (from figurative art to music, poetry, and so on) virtually indistinguishable from human-made art, and so much more.

It seems fair to ask how we can trust AI not to fabricate evidence, plagiarize, defame, serve anti-democratic political ends, violate privacy, and so on.

One possible answer is that we cannot. This could be true in two senses.

In a first sense, we cannot trust AI because it is not reliable. It gets things wrong too often; there is no way to figure out whether it is wrong without doing ourselves the kind of research that the software was supposed to do for us; and it could be used in unethical ways. On this view, the right attitude towards AI is one of cautious distrust. What it does might well be impressive, but it is not reliable epistemically or ethically.

In a second sense, we cannot trust AI for the same reason why we cannot distrust it, either. Quite simply, trust (and distrust) is not the kind of attitude we can have towards tools. Unlike humans, tools are just means to our ends. They can be more or less reliable, but not more or less trustworthy. In order to trust, we need to have certain dispositions – or ‘reactive attitudes’, to use some philosophical jargon – that can only be appropriately directed at humans. According to Richard Holton’s account of ‘trust’, for instance, trust requires the readiness to feel betrayed by the individual you trust[1]. Or perhaps we can talk, less emphatically, of readiness to feel let down.


ChatGPT Has a Sexual Harassment Problem

written by César Palacios-González

@CPalaciosG

If I were to post online that you have been accused of sexually harassing someone, you could rightly maintain that this is libellous. This is a false statement that damages your reputation. You could demand that I correct it and that I do so as soon as possible. The legal system could punish me for what I have done, and, depending on where I was in the world, it could send me to prison, fine me, and ask me to delete and retract my statements. Falsely accusing someone of sexual harassment is considered to be very serious.

In addition to the legal aspect there is also an ethical one. I have done something morally wrong, and more specifically, I have harmed you. We know this because, everything else being equal, if I had not falsely claimed that you have been accused of sexual harassment, you would be better off. This way of putting it might sound odd but it is not really so if we compare it to, for example, bodily harms. If I wantonly break your arm I harm you, and I do so because if I hadn’t done so you would be better off.

How Brain-to-Brain Interfaces Will Make Things Difficult for Us

Written by David Lyreskog

Four images depicting ‘Hivemind Brain-Computer Interfaces’, as imagined by the AI art generator Midjourney

 

A growing number of technologies are currently being developed to improve and distribute thinking and decision-making. Rapid progress in brain-to-brain interfacing and in hybrid and artificial intelligence promises to transform how we think about collective and collaborative cognitive tasks. With implementations ranging from research to entertainment, and from therapeutics to military applications, we need to anticipate and monitor the impacts of these tools as they continue to improve – how they may affect our society, but also how they may reshape our fundamental understanding of agency, responsibility, and other concepts which ground our moral landscapes.

Should Social Media Companies Use Artificial Intelligence to Automate Content Moderation on their Platforms and, if so, Under What Conditions?


This article received an honourable mention in the graduate category of the 2023 National Oxford Uehiro Prize in Practical Ethics

Written by University of Oxford student Trenton Andrew Sewell 

Social Media Companies (SMCs) should use artificial intelligence (‘AI’) to automate content moderation (‘CM’) provided they meet two kinds of conditions. Firstly, ‘End Conditions’ (‘ECs’), which restrict what content is moderated. Secondly, ‘Means Conditions’ (‘MCs’), which restrict how moderation occurs.

This essay focuses on MCs. Assuming some form of moderation is permissible, I will discuss how, and whether, SMCs should use AI to moderate. To this end, I outline how CM AI should respect users’ ‘moral agency’ (‘MA’) through transparency, clarity, and providing an option to appeal the AI’s judgment. I then address whether AI’s failure to respect MA proscribes its use. It does not. SMCs are permitted[1] to use AI, despite procedural failures, to discharge substantive obligations to users and owners.

Ethical Biological Naturalism and the Case Against Moral Status for AIs


This article received an honourable mention in the graduate category of the 2023 National Oxford Uehiro Prize in Practical Ethics

Written by University of Oxford student Samuel Iglesias

 

Introduction

6.522. “There are, indeed, things that cannot be put into words. They make themselves manifest. They are what is mystical”. —Ludwig Wittgenstein, Tractatus Logico-Philosophicus.

What determines whether an artificial intelligence has moral status? Do mental states, such as the vivid and conscious feelings of pleasure or pain, matter? Some ethicists argue that “what goes on in the inside matters greatly” (Nyholm and Frank 2017). Others, like John Danaher, argue that “performative artifice, by itself, can be sufficient to ground a claim of moral status” (2018). This view, called ethical behaviorism, “respects our epistemic limits” and states that if an entity “consistently behaves like another entity to whom we afford moral status, then it should be granted the same moral status.”