Moral Psychology at the Uehiro Centre for Practical Ethics

Written by Joanna Demaree-Cotton

 

This last Michaelmas term marked the inaugural series of lab meetings for the Uehiro Centre’s BioXPhi lab (https://moralpsychlab.web.ox.ac.uk). Co-directed by myself and Dr. Brian Earp, the lab brings philosophers together with psychologists to conduct experimental studies in moral psychology and bioethics. Specifically, we investigate the contributing factors and psychological processes that shape:

 

  • Moral intuitions, judgments and reasoning
  • Moral agency, moral action and moral motivation
  • The structure and application of (bio)ethical concepts

… with an eye to contributing to substantive normative and philosophical debates in ethics.

(What’s a “lab meeting”, you ask? Our lab meetings are where members of our lab come together with colleagues and collaborators to present and get feedback on ongoing research relevant to the experimental study of ethics.)

 

Just because people reason in a certain way about morality, that doesn’t mean that this is how we should reason about morality. People get things wrong all the time. Concepts can be incoherent. Reasoning can be flawed. Judgments can be biased and self-serving. Moral motivation can be weak. Moreover, people can often be ignorant or mistaken about many of the morally relevant details and nuances that apply to some particular situation, resulting in moral judgments that are simply ill-informed.

 

Yet, investigating ordinary moral psychology is invaluable for ethics and moral philosophy for a number of reasons (Earp, Demaree-Cotton et al., 2020; Earp, Lewis et al., 2021).

Read More »Moral Psychology at the Uehiro Centre for Practical Ethics

Guest Post: It has become possible to use cutting-edge AI language models to generate convincing high school and undergraduate essays. Here’s why that matters


Written by: Julian Koplin & Joshua Hatherley, Monash University

ChatGPT is a variant of the GPT-3 language model developed by OpenAI. It is designed to generate human-like text in response to prompts given by users. As with any language model, ChatGPT is a tool that can be used for a variety of purposes, including academic research and writing. However, it is important to consider the ethical implications of using such a tool in academic contexts. The use of ChatGPT, or other large language models, to generate undergraduate essays raises a number of ethical considerations. One of the most significant concerns is the issue of academic integrity and plagiarism.

One concern is the potential for ChatGPT or similar language models to be used to produce work that is not entirely the product of the person submitting it. If a student were to use ChatGPT to generate significant portions of an academic paper or other written work, it would be considered plagiarism, as they would not be properly crediting the source of the material. Plagiarism is a serious offence in academia, as it undermines the integrity of the research process and can lead to the dissemination of false or misleading information. This is not only dishonest, but it also undermines the fundamental principles of academic scholarship, which is based on original research and ideas.

Another ethical concern is the potential for ChatGPT or other language models to be used to generate work that is not fully understood by the person submitting it. While ChatGPT and other language models can produce high-quality text, they do not have the same level of understanding or critical thinking skills as a human. As such, using ChatGPT or similar tools to generate work without fully understanding and critically evaluating the content could lead to the dissemination of incomplete or incorrect information.

In addition to the issue of academic integrity, the use of ChatGPT to generate essays also raises concerns about the quality of the work that is being submitted. Because ChatGPT is a machine learning model, it is not capable of original thought or critical analysis. It simply generates text based on the input data that it is given. This means that the essays generated by ChatGPT would likely be shallow and lacking in substance, and they would not accurately reflect the knowledge and understanding of the student who submitted them.

Furthermore, the use of ChatGPT to generate essays could also have broader implications for education and the development of critical thinking skills. If students were able to simply generate essays using AI, they would have little incentive to engage with the material and develop their own understanding and ideas. This could lead to a decrease in the overall quality of education, and it could also hinder the development of important critical thinking and problem-solving skills.

Overall, the use of ChatGPT to generate undergraduate essays raises serious ethical concerns. While these tools can be useful for generating ideas or rough drafts, it is important to properly credit the source of any material generated by the model and to fully understand and critically evaluate the content before incorporating it into one’s own work. It undermines academic integrity, it is likely to result in low-quality work, and it could have negative implications for education and the development of critical thinking skills. Therefore, it is important that students, educators, and institutions take steps to ensure that this practice is not used or tolerated.

Everything that you just read was generated by an AI

Read More »Guest Post: It has become possible to use cutting-edge AI language models to generate convincing high school and undergraduate essays. Here’s why that matters

Cross Post: Halving Subsidised Psychology Appointments is a Grave Mistake—Young Australians Will Bear a Significant Burden


Written by Dr Daniel D’Hotman, DPhil student studying mental health and ethics at the Oxford Uehiro Centre

The original version of this article was published in the Sydney Morning Herald

Unprecedented times called for unprecedented measures. COVID-19 was the most significant health crisis many of us had ever faced. While the physical effects were much discussed, the mental health burden was arguably just as devastating. In response, the previous Government doubled subsidised mental health appointments under the Better Access Program, allowing Australians suffering from mental illnesses like anxiety, PTSD and depression to claim an extra 10 appointments per year.

Now we are trying to convince ourselves COVID-19 and its impacts are over. In addition to requiring referrals for some PCR tests, the Australian Government is cutting the number of mental health visits available under Medicare back to pre-pandemic levels, arguing this is a necessary step to improve equity. According to a review of the program, the extra appointments clogged up waitlists and reduced access for those not engaging with services.

Read More »Cross Post: Halving Subsidised Psychology Appointments is a Grave Mistake—Young Australians Will Bear a Significant Burden

Abortion in Wonderland

By Charles Foster

 

 

Image: Heidi Crowter: Copyright Don’t Screen Us Out

Scene: A pub in central London

John: They did something worthwhile there today, for once, didn’t they? [He motions towards the Houses of Parliament]

Jane: What was that?

John: Didn’t you hear? They’ve passed a law saying that a woman can abort a child up to term if the child turns out to have red hair.

Jane: But I’ve got red hair!

John: So what? The law is about the fetus. It has nothing whatever to do with people who are actually born.

Jane: Eh?

That’s the gist of the Court of Appeal’s recent decision in the case of Aidan Lea-Wilson and Heidi Crowter (now married and known as Heidi Carter).

Read More »Abortion in Wonderland

Simulate Your True Self

Written by Muriel Leuenberger

A modified version of this post is forthcoming in Think, edited by Stephen Law.

Spoiler warning: if you want to watch the movie Don’t Worry Darling, I advise you not to read this article beforehand (but definitely read it afterwards).

One of the most common recurring philosophical thought experiments in movies must be the simulation theory. The Matrix, The Truman Show, and Inception are only three of countless movies following the trope of “What if reality is a simulation?”. The most recent addition is Don’t Worry Darling by Olivia Wilde. In this movie, the main character Alice discovers that her idyllic 1950s-style housewife life in the company town of Victory, California, is a simulation. Some of the inhabitants of Victory (mostly the men) are aware of this, such as her husband Jack, who forced her into the simulation. Others (mostly the women) share Alice’s unawareness. In the course of the movie, Alice’s memories of her real life return, and she manages to escape the simulation. This blog post is part of a series of articles in which Hazem Zohny, Mette Høeg, and I explore ethical issues connected to the simulation theory through the example of Don’t Worry Darling.

One question we may ask is whether living in a simulation, with a simulated and potentially altered body and mind, would entail giving up your true self or if you could come closer to it by freeing yourself from the constraints of reality. What does it mean to be true to yourself in a simulated world? Can you be real in a fake world with a fake body and fake memories? And would there be any value in trying to be authentic in a simulation?

Read More »Simulate Your True Self

There Is No Such Thing As A Purely Logical Argument

Written by Mette Leonard Høeg

This blogpost is a prepublication draft of an article forthcoming in THINK.

Etching by J.F.P. Peyron, ca. 1773

It is well-known that rational insight and understanding of scientific facts do not necessarily lead to psychological change and shifts in intuitions. In his paper “Grief and the inconsolation of philosophy” (unpublished manuscript), Dominic Wilkinson sheds light on this gap between insight and emotions as he considers the potential of philosophy for offering consolation in relation to human mortality. More specifically, he looks at the potential of Derek Parfit’s influential reductionist account of personal identity to provide psychological consolation in the face of the death of oneself and of others. In Reasons and Persons, Parfit argues that personal identity is reducible to physical and psychological continuity of mental states, and that there is no additional fact, diachronic entity or essence that determines identity; and he points to the potential for existential liberation and consolation in adopting this anti-essentialist perspective: “Is the truth depressing? Some might find it so. But I find it liberating, and consoling. When I believed that my existence was such a further fact, I seemed imprisoned in myself. My life seemed like a glass tunnel, through which I was moving faster every year, and at the end of which there was darkness. When I changed my view, the walls of my glass tunnel disappeared. I now live in the open air.”

Read More »There Is No Such Thing As A Purely Logical Argument

The Non-Rationality of Radical Human Enhancement and Transhumanism

Written by David Lyreskog

 

Over the last few decades, the human enhancement debate has been concerned with ethical issues raised by methods for improving the physical, cognitive, or emotive states of individual people, and of the human species as a whole. Arguments in favour of enhancement, particularly from transhumanists, typically defend it as a paradigm of rationality, presenting it as a clear-eyed, logical defence of what we stand to gain from transcending the typical limits of our species.

Read More »The Non-Rationality of Radical Human Enhancement and Transhumanism

Nudges and Incomplete Preferences

Written by Sarah Raskoff

(This post is based on my recently published paper in Bioethics.)

Nudges are small changes in the presentation of options that make a predictable impact on people’s decisions. Proponents of nudges often claim that they are justified as paternalistic interventions that respect autonomy: they lead people to make better choices, while still allowing them to choose for themselves. A classic example is changing the location of food items in a cafeteria so that healthier choices are more salient. The salience of healthy foods predictably leads people to select them, even though they are still free to select the unhealthy options, too.

Nudges have become increasingly popular, but there are many objections to their widespread use. Some allege that nudges do not actually benefit people, while others suspect that they do not really respect autonomy. Although there are many ways of making sense of this latter concern, in a recent paper, I develop a new version of this objection, which takes as its starting point the observation that people often have incomplete preferences.

Read More »Nudges and Incomplete Preferences

Guest Post: Could Laboratory Created Brains in the Future have Moral Status?


Written by Dominic McGuire, DPhil Student, Queen’s College Oxford

Jonathan Pugh’s interesting Practical Ethics blog post of October 14th, 2022 (https://blog.practicalethics.ox.ac.uk/2022/10/brain-cells-slime-mold-and-sentience-semantics/) prompted several additional thoughts. Pugh’s post considered some of the implications of recent media reports about laboratory-grown brains, also called minibrains, which can play the video game Pong. Pong is a simple representation of the game of table tennis.

In his post, Pugh concludes that the Pong-playing minibrains are not sentient. This is because, in his view, they do not possess phenomenal consciousness and thus are unable to experience pain or pleasure. To some, phenomenal consciousness is an essential requirement for moral status: on this view, only entities that are phenomenally conscious have the kinds of interests that warrant strong forms of moral protection.

Read More »Guest Post: Could Laboratory Created Brains in the Future have Moral Status?