
Artificial Intelligence

Humanizing Chatbots Is Hard To Resist — But Why?

Written by Madeline G. Reinecke (@mgreinecke)

You might recall a story from a few years ago, concerning former Google software engineer Blake Lemoine. Part of Lemoine’s job was to chat with LaMDA, a large language model (LLM) in development at the time, to detect discriminatory speech. But the more Lemoine chatted with LaMDA, the more…

Read More »Humanizing Chatbots Is Hard To Resist — But Why?

Stay Clear of the Door

An AI door, according to a generative AI

Written by David Lyreskog

In what is quite possibly my last entry for the Practical Ethics blog, as I’m sadly leaving the Uehiro Centre in July, I would like to reflect on some things that have been stirring my mind the last year or so.

In particular, I have been thinking about thinking with machines, with people, and what the difference is.

The Uehiro Centre for Practical Ethics is located in an old carpet warehouse on an ordinary side street in Oxford. Facing the building, there is a gym to your left, and a pub to your right, mocking the researchers residing within the centre walls with a daily dilemma. 

As you are granted access to the building – be it via buzzer or key card – a dry, somewhat sad, voice states “stay clear of the door” before the door slowly swings open.

Read More »Stay Clear of the Door

It is not about AI, it is about humans

Written by Alberto Giubilini

We might be forgiven for asking so frequently these days whether we should trust artificial intelligence. Too much has been written about the promises and perils of ChatGPT to escape the question. Upon reading both enthusiastic and concerned accounts of it, there seems to be very little the software cannot do. It can provide or fabricate a huge amount of information in the blink of an eye, reinterpret it and organize it into essays that seem written by humans, produce different forms of art (from figurative art to music, poetry, and so on) virtually indistinguishable from human-made art, and so much more.

It seems fair to ask how we can trust AI not to fabricate evidence, plagiarize, defame, serve anti-democratic political ends, violate privacy, and so on.

One possible answer is that we cannot. This could be true in two senses.

In a first sense, we cannot trust AI because it is not reliable. It gets things wrong too often, there is no way to tell whether it is wrong without ourselves doing the kind of research the software was supposed to do for us, and it could be used in unethical ways. On this view, the right attitude towards AI is one of cautious distrust. What it does might well be impressive, but it is not epistemically or ethically reliable.

In a second sense, we cannot trust AI for the same reason we cannot distrust it, either. Quite simply, trust (and distrust) is not the kind of attitude we can have towards tools. Unlike humans, tools are just means to our ends. They can be more or less reliable, but not more or less trustworthy. In order to trust, we need to have certain dispositions – or ‘reactive attitudes’, to use some philosophical jargon – that can only be appropriately directed at humans. According to Richard Holton’s account of ‘trust’, for instance, trust requires the readiness to feel betrayed by the individual you trust[1]. Or perhaps we can talk, less emphatically, of a readiness to feel let down.

Read More »It is not about AI, it is about humans

ChatGPT Has a Sexual Harassment Problem

Written by César Palacios-González (@CPalaciosG)

If I were to post online that you have been accused of sexually harassing someone, you could rightly maintain that this is libellous. This is a false statement that damages your reputation. You could demand that I correct it and that I do so as soon as possible. The legal system could punish me for what I have done, and, depending on where I was in the world, it could send me to prison, fine me, and ask me to delete and retract my statements. Falsely accusing someone of sexual harassment is considered to be very serious.

In addition to the legal aspect there is also an ethical one. I have done something morally wrong, and more specifically, I have harmed you. We know this because, everything else being equal, if I had not falsely claimed that you have been accused of sexual harassment, you would be better off. This way of putting it might sound odd, but it is not really so if we compare it to, for example, bodily harms. If I wantonly break your arm I harm you, and I do so because if I hadn’t done so you would be better off.

Read More »ChatGPT Has a Sexual Harassment Problem

Guest Post: It has become possible to use cutting-edge AI language models to generate convincing high school and undergraduate essays. Here’s why that matters


Written by: Julian Koplin & Joshua Hatherley, Monash University

ChatGPT is a variant of the GPT-3 language model developed by OpenAI. It is designed to generate human-like text in response to prompts given by users. As with any language model, ChatGPT is a tool that can be used for a variety of purposes, including academic research and writing. However, it is important to consider the ethical implications of using such a tool in academic contexts. The use of ChatGPT, or other large language models, to generate undergraduate essays raises a number of ethical considerations. One of the most significant concerns is the issue of academic integrity and plagiarism.

One concern is the potential for ChatGPT or similar language models to be used to produce work that is not entirely the product of the person submitting it. If a student were to use ChatGPT to generate significant portions of an academic paper or other written work, it would be considered plagiarism, as they would not be properly crediting the source of the material. Plagiarism is a serious offence in academia, as it undermines the integrity of the research process and can lead to the dissemination of false or misleading information. This is not only dishonest, but it also undermines the fundamental principles of academic scholarship, which is based on original research and ideas.

Another ethical concern is the potential for ChatGPT or other language models to be used to generate work that is not fully understood by the person submitting it. While ChatGPT and other language models can produce high-quality text, they do not have the same level of understanding or critical thinking skills as a human. As such, using ChatGPT or similar tools to generate work without fully understanding and critically evaluating the content could lead to the dissemination of incomplete or incorrect information.

In addition to the issue of academic integrity, the use of ChatGPT to generate essays also raises concerns about the quality of the work that is being submitted. Because ChatGPT is a machine learning model, it is not capable of original thought or critical analysis. It simply generates text based on the input data that it is given. This means that the essays generated by ChatGPT would likely be shallow and lacking in substance, and they would not accurately reflect the knowledge and understanding of the student who submitted them.

Furthermore, the use of ChatGPT to generate essays could also have broader implications for education and the development of critical thinking skills. If students were able to simply generate essays using AI, they would have little incentive to engage with the material and develop their own understanding and ideas. This could lead to a decrease in the overall quality of education, and it could also hinder the development of important critical thinking and problem-solving skills.

Overall, the use of ChatGPT to generate undergraduate essays raises serious ethical concerns. While these tools can be useful for generating ideas or rough drafts, it is important to properly credit the source of any material generated by the model and to fully understand and critically evaluate the content before incorporating it into one’s own work. It undermines academic integrity, it is likely to result in low-quality work, and it could have negative implications for education and the development of critical thinking skills. Therefore, it is important that students, educators, and institutions take steps to ensure that this practice is not used or tolerated.

Everything that you just read was generated by an AI

Read More »Guest Post: It has become possible to use cutting-edge AI language models to generate convincing high school and undergraduate essays. Here’s why that matters

Reflective Equilibrium in a Turbulent Lake: AI Generated Art and The Future of Artists

Stable diffusion image, prompt: "Reflective equilibrium in a turbulent lake. Painting by Greg Rutkowski" by Anders Sandberg – Future of Humanity Institute, University of Oxford

Is there a future for humans in art? Over the last few weeks the question has been loudly debated online, as machine learning made a surprise charge into making pictures. One image won a state art fair. But artists complain that the AI art is actually a rehash of their art, a form of automated plagiarism that threatens their livelihood.

How do we ethically navigate the turbulent waters of human and machine creativity, business demands, and rapid technological change? Is it even possible?

Read More »Reflective Equilibrium in a Turbulent Lake: AI Generated Art and The Future of Artists

AI and the Transition Paradox

When Will AI Exceed Human Performance? Evidence from AI Experts. https://arxiv.org/abs/1705.08807

by Aksel Braanen Sterri

The most important development in human history will take place not too far in the future. Artificial intelligence, or AI for short, will become better (and cheaper) than humans at most tasks. This will generate enormous wealth that can be used to fill human needs.

However, since most humans will not be able to compete with AI, there will be little demand for ordinary people’s labour-power. The immediate effect of a world without work is that people will lose their primary source of income and whatever meaning, mastery, sense of belonging and status they get from their work. Our collective challenge is to find meaning and other ways to reliably get what we need in this new world.

Read More »AI and the Transition Paradox

Hedonism, the Experience Machine, and Virtual Reality

By Roger Crisp

I take hedonism about well-being or welfare to be the view that the only thing that is good for any being is pleasure, and that what makes pleasure good is nothing other than its being pleasant. The standard objections to hedonism of this kind have mostly been of the same form: there are things other than pleasure that are good, and pleasantness isn’t the only property that makes things good.

Read More »Hedonism, the Experience Machine, and Virtual Reality

Judgebot.exe Has Encountered a Problem and Can No Longer Serve

Written by Stephen Rainey

Artificial intelligence (AI) is anticipated by many as having the potential to revolutionise traditional fields of knowledge and expertise. In some quarters, this has led to fears about the future of work, with machines muscling in on otherwise human work. Elon Musk is rattling cages again in this context with his imaginary ‘Teslabot’. Reports on the future of work have included these replacement fears for administrative jobs, service and care roles, manufacturing, medical imaging, and the law.

In the context of legal decision-making, a job well done includes reference to prior cases as well as statute. This is, in part, to ensure continuity and consistency in legal decision-making. The more that relevant cases can be drawn upon in any instance of legal decision-making, the better the possibility of good decision-making. But given the volume of legal documentation and the passage of time, there may be too much for legal practitioners to fully comprehend.
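The post’s argument is about the volume problem, not any particular system, but a minimal sketch can make the retrieval idea concrete. The snippet below is entirely hypothetical (made-up case summaries and simple term matching; a real legal AI would be far more sophisticated). It ranks prior cases by textual similarity to the facts at hand:

```python
import math
from collections import Counter

# Hypothetical, made-up case summaries -- illustrative only.
cases = {
    "Case A": "tenant eviction notice period dispute rental contract",
    "Case B": "software patent infringement damages licensing dispute",
    "Case C": "eviction of tenant without notice breach of rental terms",
}

def tfidf(docs):
    """Weight each term by its count times inverse document frequency."""
    n = len(docs)
    df = Counter(term for text in docs.values() for term in set(text.split()))
    return {name: {t: c * math.log(n / df[t])
                   for t, c in Counter(text.split()).items()}
            for name, text in docs.items()}

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = math.hypot(*u.values()) * math.hypot(*v.values())
    return dot / norm if norm else 0.0

facts = "tenant evicted without proper notice"  # the matter at hand
vectors = tfidf({**cases, "_facts": facts})
query = vectors.pop("_facts")

# Rank precedents by similarity to the facts (no stemming, so 'evicted'
# and 'eviction' do not match -- a real system would normalise terms).
for name in sorted(cases, key=lambda c: -cosine(query, vectors[c])):
    print(name, f"{cosine(query, vectors[name]):.3f}")
```

Even this toy ranker shows the appeal: a machine can score every document in a corpus far too large for any practitioner to read.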

Read More »Judgebot.exe Has Encountered a Problem and Can No Longer Serve

A Sad Victory

I recently watched the documentary AlphaGo, directed by Greg Kohs. The film tells the story of the refinement of AlphaGo—a computer Go program built by DeepMind—and tracks the match between AlphaGo and the 18-time world Go champion Lee Sedol.

Go is an ancient Chinese board game. It was considered one of the four essential arts of aristocratic Chinese scholars. The goal is to end the game having captured more territory than your opponent. What makes Go a particularly interesting game for AI to master is, first, its complexity. Compared to chess, Go has a larger board and many more alternatives to consider per move. The number of possible moves in a given position is about 20 in chess; in Go, it’s about 200. The number of possible configurations of the board is more than the number of atoms in the universe. Second, Go is a game in which intuition is believed to play a big role. When professionals are asked why they played a particular move, they will often respond with something to the effect that ‘it felt right’. It is this intuitive quality that leads some to consider Go an art, and Go players artists. For a computer program to beat human Go players, then, it would have to mimic human intuition (or, more precisely, mimic the results of human intuition).
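To put those numbers side by side, here is a back-of-the-envelope sketch. The branching factors (~20 and ~200) come from the paragraph above; the typical game lengths and the atoms estimate are assumed round numbers for illustration, not figures from the film:

```python
# Rough orders of magnitude for chess vs. Go, using exact integer arithmetic.
CHESS_BRANCHING, CHESS_PLIES = 20, 80  # ~20 moves/position; ~80-ply game (assumed)
GO_BRANCHING, GO_MOVES = 200, 150      # ~200 moves/position; ~150-move game (assumed)
ATOMS_IN_UNIVERSE = 10 ** 80           # common order-of-magnitude estimate
GO_LEGAL_POSITIONS = 10 ** 170         # known result: ~2.1e170 legal board configurations

def magnitude(n: int) -> int:
    """floor(log10(n)) without floating-point overflow."""
    return len(str(n)) - 1

print(f"chess game tree ~ 10^{magnitude(CHESS_BRANCHING ** CHESS_PLIES)}")  # 10^104
print(f"go game tree    ~ 10^{magnitude(GO_BRANCHING ** GO_MOVES)}")        # 10^345
print(f"legal Go boards exceed atoms by ~ 10^"
      f"{magnitude(GO_LEGAL_POSITIONS // ATOMS_IN_UNIVERSE)} times")         # 10^90
```

Even with crude inputs, the gap makes plain why search alone could not conquer Go, and why mimicking intuition mattered.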

Read More »A Sad Victory