On Grief and Griefbots
Written by Cristina Voinea
This blogpost is a prepublication draft of an article forthcoming in THINK
Large Language Models are all the rage right now. Among the things we can use them for is the creation of digital personas, known as ‘griefbots’, that imitate the way people who have passed away spoke and wrote. This can be achieved by feeding a person’s data, including their written works, blog posts, social media content, photos, videos, and more, into a Large Language Model such as ChatGPT. Unlike deepfakes, griefbots are dynamic digital entities that continuously learn and adapt. They can process new information, respond to questions, offer guidance, and even engage in discussions on current events or personal topics, all while echoing the unique voice and language patterns of the individuals they mimic.
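To make the mechanism concrete, here is a minimal, purely illustrative sketch of the first step such services perform: compiling a person’s texts into a “persona prompt” that an LLM would then be asked to imitate. The name, sample writings, and function are invented for illustration; real products like those discussed below use far more data and a hosted model.

```python
# Hypothetical sketch of griefbot persona construction.
# All names and sample texts are invented; a real service would pass
# this prompt to a hosted LLM (e.g. via a chat API) at each turn.

def build_persona_prompt(name, writings, style_notes):
    """Compile a person's texts into a system prompt asking an LLM
    to imitate their voice and vocabulary."""
    samples = "\n".join(f"- {w}" for w in writings)
    return (
        f"You are a conversational persona of {name}. "
        f"Imitate their tone and vocabulary.\n"
        f"Style notes: {style_notes}\n"
        f"Example writings:\n{samples}"
    )

prompt = build_persona_prompt(
    "Alex",  # hypothetical person
    ["Off to the allotment again!", "Tea first, decisions later."],
    "warm, wry, fond of gardening metaphors",
)
print(prompt)
```

The ethical questions below do not depend on these implementation details, but it is worth noticing how little machinery is needed to get started.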
Numerous startups are already anticipating the growing demand for digital personas. Replika is one of the first companies to offer griefbots, although it now focuses on providing more general AI companions, “always there to listen and talk, always on your side”. HereAfter AI offers the opportunity to capture one’s life story by engaging in dialogue with either a chatbot or a human biographer. This data is then harnessed and compiled with other data points to construct a lifelike replica of oneself that can then be offered to loved ones “for the holidays, Mother’s Day, Father’s Day, birthdays, retirements, and more.” And You, Only Virtual is “pioneering advanced digital communications so that we Never Have to Say Goodbye to those we love.”
Playing the Game of Faces with AI
Written by Edmond Awad
In the popular series “Game of Thrones” (and the corresponding “A Song of Ice and Fire” novels), the “Game of Faces” is a training method used by the Faceless Men, an enigmatic guild of assassins. This method teaches trainees to convincingly adopt the face of others for their covert missions.
The Game of Faces can be seen as a metaphor for the way we interact with others in the real world, as well as the way we present ourselves online. In the Game of Thrones TV series, the Faceless Men are able to change their appearance at will, which allows them to deceive others and get close to their targets. This ability can be seen as a symbol of the power of deception and manipulation.
Finding Meaning in the Age of Neurocentrism – and in a Transhuman Future
Written by Mette Leonard Høeg
Through the ordinary state of being, we’re already creators in the most profound way, creating our experience of reality and composing the world we perceive.
Rick Rubin, The Creative Act
Phenomenal consciousness is still a highly mysterious phenomenon – mainly subjectively accessible, and there is far from scientific consensus on the explanation of its sources. The neuroscientific understanding of the human mind is, however, deepening, and the possibilities of technologically and biomedically altering brain and mind states and for engineering awareness in technological systems are developing rapidly.
Stay Clear of the Door
Written by David Lyreskog
In what is quite possibly my last entry for the Practical Ethics blog, as I’m sadly leaving the Uehiro Centre in July, I would like to reflect on some things that have been stirring my mind the last year or so.
In particular, I have been thinking about thinking with machines, with people, and what the difference is.
–
The Uehiro Centre for Practical Ethics is located in an old carpet warehouse on an ordinary side street in Oxford. Facing the building, there is a gym to your left, and a pub to your right, mocking the researchers residing within the centre walls with a daily dilemma.
As you are granted access to the building – be it via buzzer or key card – a dry, somewhat sad, voice states “stay clear of the door” before the door slowly swings open.
It is not about AI, it is about humans
Written by Alberto Giubilini
We might be forgiven for asking so frequently these days whether we should trust artificial intelligence. Too much has been written about the promises and perils of ChatGPT to escape the question. Upon reading both enthusiastic and concerned accounts of it, there seems to be very little the software cannot do. It can provide or fabricate a huge amount of information in the blink of an eye, reinterpret it and organize it into essays that seem written by humans, produce different forms of art (from figurative art to music, poetry, and so on) virtually indistinguishable from human-made art, and so much more.
It seems fair to ask how we can trust AI not to fabricate evidence, plagiarize, defame, serve anti-democratic political ends, violate privacy, and so on.
One possible answer is that we cannot. This could be true in two senses.
In a first sense, we cannot trust AI because it is not reliable. It gets things wrong too often, there is no way to figure out whether it is wrong without ourselves doing the kind of research that the software was supposed to do for us, and it could be used in unethical ways. On this view, the right attitude towards AI is one of cautious distrust. What it does might well be impressive, but not reliable epistemically or ethically.
In a second sense, we cannot trust AI for the same reason that we cannot distrust it, either. Quite simply, trust (and distrust) is not the kind of attitude we can have towards tools. Unlike humans, tools are just means to our ends. They can be more or less reliable, but not more or less trustworthy. In order to trust, we need to have certain dispositions – or ‘reactive attitudes’, to use some philosophical jargon – that can only be appropriately directed at humans. According to Richard Holton’s account of ‘trust’, for instance, trust requires the readiness to feel betrayed by the individual you trust[1]. Or perhaps we can talk, less emphatically, of a readiness to feel let down.
ChatGPT Has a Sexual Harassment Problem
Written by César Palacios-González
@CPalaciosG
If I were to post online that you have been accused of sexually harassing someone, you could rightly maintain that this is libellous. This is a false statement that damages your reputation. You could demand that I correct it and that I do so as soon as possible. The legal system could punish me for what I have done, and, depending on where I was in the world, it could send me to prison, fine me, and ask me to delete and retract my statements. Falsely accusing someone of sexual harassment is considered to be very serious.
In addition to the legal aspect there is also an ethical one. I have done something morally wrong, and more specifically, I have harmed you. We know this because, everything else being equal, if I had not falsely claimed that you have been accused of sexual harassment, you would be better off. This way of putting it might sound odd but it is not really so if we compare it to, for example, bodily harms. If I wantonly break your arm I harm you, and I do so because if I hadn’t done so you would be better off.
How Brain-to-Brain Interfaces Will Make Things Difficult for Us
Written by David Lyreskog
A growing number of technologies are currently being developed to improve and distribute thinking and decision-making. Rapid progress in brain-to-brain interfacing, and hybrid and artificial intelligence, promises to transform how we think about collective and collaborative cognitive tasks. With implementations ranging from research to entertainment, and from therapeutics to military applications, as these tools continue to improve, we need to anticipate and monitor their impacts – how they may affect our society, but also how they may reshape our fundamental understanding of agency, responsibility, and other concepts which ground our moral landscapes.
Ethical Biological Naturalism and the Case Against Moral Status for AIs
This article received an honourable mention in the graduate category of the 2023 National Oxford Uehiro Prize in Practical Ethics
Written by University of Oxford student Samuel Iglesias
Introduction
6.522. “There are, indeed, things that cannot be put into words. They make themselves manifest. They are what is mystical”. —Ludwig Wittgenstein, Tractatus Logico-Philosophicus.
What determines whether an artificial intelligence has moral status? Do mental states, such as the vivid and conscious feelings of pleasure or pain, matter? Some ethicists argue that “what goes on in the inside matters greatly” (Nyholm and Frank 2017). Others, like John Danaher, argue that “performative artifice, by itself, can be sufficient to ground a claim of moral status” (2018). This view, called ethical behaviorism, “respects our epistemic limits” and states that if an entity “consistently behaves like another entity to whom we afford moral status, then it should be granted the same moral status.”
Guest Post: It has become possible to use cutting-edge AI language models to generate convincing high school and undergraduate essays. Here’s why that matters
Written by: Julian Koplin & Joshua Hatherley, Monash University
ChatGPT is a variant of the GPT-3 language model developed by OpenAI. It is designed to generate human-like text in response to prompts given by users. As with any language model, ChatGPT is a tool that can be used for a variety of purposes, including academic research and writing. However, it is important to consider the ethical implications of using such a tool in academic contexts. The use of ChatGPT, or other large language models, to generate undergraduate essays raises a number of ethical considerations. One of the most significant concerns is the issue of academic integrity and plagiarism.
One concern is the potential for ChatGPT or similar language models to be used to produce work that is not entirely the product of the person submitting it. If a student were to use ChatGPT to generate significant portions of an academic paper or other written work, it would be considered plagiarism, as they would not be properly crediting the source of the material. Plagiarism is a serious offence in academia, as it undermines the integrity of the research process and can lead to the dissemination of false or misleading information. This is not only dishonest, but it also undermines the fundamental principles of academic scholarship, which are based on original research and ideas.
Another ethical concern is the potential for ChatGPT or other language models to be used to generate work that is not fully understood by the person submitting it. While ChatGPT and other language models can produce high-quality text, they do not have the same level of understanding or critical thinking skills as a human. As such, using ChatGPT or similar tools to generate work without fully understanding and critically evaluating the content could lead to the dissemination of incomplete or incorrect information.
In addition to the issue of academic integrity, the use of ChatGPT to generate essays also raises concerns about the quality of the work that is being submitted. Because ChatGPT is a machine learning model, it is not capable of original thought or critical analysis. It simply generates text based on the input data that it is given. This means that the essays generated by ChatGPT would likely be shallow and lacking in substance, and they would not accurately reflect the knowledge and understanding of the student who submitted them.
Furthermore, the use of ChatGPT to generate essays could also have broader implications for education and the development of critical thinking skills. If students were able to simply generate essays using AI, they would have little incentive to engage with the material and develop their own understanding and ideas. This could lead to a decrease in the overall quality of education, and it could also hinder the development of important critical thinking and problem-solving skills.
Overall, the use of ChatGPT to generate undergraduate essays raises serious ethical concerns: it undermines academic integrity, it is likely to result in low-quality work, and it could have negative implications for education and the development of critical thinking skills. While these tools can be useful for generating ideas or rough drafts, it is important to properly credit the source of any material generated by the model and to fully understand and critically evaluate the content before incorporating it into one’s own work. Therefore, it is important that students, educators, and institutions take steps to ensure that this practice is not used or tolerated.