
Invited Guest Posts

Guest Post: It has become possible to use cutting-edge AI language models to generate convincing high school and undergraduate essays. Here’s why that matters


Written by: Julian Koplin & Joshua Hatherley, Monash University

ChatGPT is a variant of the GPT-3 language model developed by OpenAI. It is designed to generate human-like text in response to prompts given by users. As with any language model, ChatGPT is a tool that can be used for a variety of purposes, including academic research and writing. However, it is important to consider the ethical implications of using such a tool in academic contexts. The use of ChatGPT, or other large language models, to generate undergraduate essays raises a number of ethical considerations. One of the most significant concerns is the issue of academic integrity and plagiarism.

One concern is the potential for ChatGPT or similar language models to be used to produce work that is not entirely the product of the person submitting it. If a student were to use ChatGPT to generate significant portions of an academic paper or other written work, it would be considered plagiarism, as they would not be properly crediting the source of the material. Plagiarism is a serious offence in academia, as it undermines the integrity of the research process and can lead to the dissemination of false or misleading information. This is not only dishonest, but it also undermines the fundamental principles of academic scholarship, which is based on original research and ideas.

Another ethical concern is the potential for ChatGPT or other language models to be used to generate work that is not fully understood by the person submitting it. While ChatGPT and other language models can produce high-quality text, they do not have the same level of understanding or critical thinking skills as a human. As such, using ChatGPT or similar tools to generate work without fully understanding and critically evaluating the content could lead to the dissemination of incomplete or incorrect information.

In addition to the issue of academic integrity, the use of ChatGPT to generate essays also raises concerns about the quality of the work that is being submitted. Because ChatGPT is a machine learning model, it is not capable of original thought or critical analysis. It simply generates text based on the input data that it is given. This means that the essays generated by ChatGPT would likely be shallow and lacking in substance, and they would not accurately reflect the knowledge and understanding of the student who submitted them.

Furthermore, the use of ChatGPT to generate essays could also have broader implications for education and the development of critical thinking skills. If students were able to simply generate essays using AI, they would have little incentive to engage with the material and develop their own understanding and ideas. This could lead to a decrease in the overall quality of education, and it could also hinder the development of important critical thinking and problem-solving skills.

Overall, the use of ChatGPT to generate undergraduate essays raises serious ethical concerns. While these tools can be useful for generating ideas or rough drafts, it is important to properly credit the source of any material generated by the model and to fully understand and critically evaluate the content before incorporating it into one’s own work. It undermines academic integrity, it is likely to result in low-quality work, and it could have negative implications for education and the development of critical thinking skills. Therefore, it is important that students, educators, and institutions take steps to ensure that this practice is not used or tolerated.

Everything that you just read was generated by an AI


Three Observations about Justifying AI

Written by: Anantharaman Muralidharan, G Owen Schaefer, Julian Savulescu
Cross-posted with the Journal of Medical Ethics blog

Consider the following kind of medical AI. It consists of two parts. The first part is a core deep machine learning algorithm. These blackbox algorithms may be more accurate than human judgment or interpretable algorithms, but they are notoriously opaque: they cannot tell us on what basis a decision was made. The second part is an algorithm that generates a post-hoc medical justification for the core algorithm’s output. Algorithms like this are already available for visual classification. When the primary algorithm identifies a given bird as a Western Grebe, the secondary algorithm provides a justification for this decision: “because the bird has a long white neck, pointy yellow beak and red eyes”. The justification goes beyond a mere description of the provided image or a definition of the bird in question: it links the information in the image to the features that distinguish the bird. The justification is also sufficiently fine-grained to account for why the bird in the picture is not a similar bird such as the Laysan Albatross. It is not hard to imagine that such an algorithm will soon be available for medical decisions, if it is not already. Let us call this type of AI “justifying AI”, to distinguish it from algorithms which try, to some degree or other, to wear their inner workings on their sleeves.
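To make the two-part structure concrete, here is a minimal sketch in Python. The class names, the stubbed prediction and the canned rationale are purely illustrative assumptions; the post describes only the architecture, not any particular implementation.

```python
# A minimal sketch of the two-part "justifying AI" described above.
# Everything here is illustrative: the class names are hypothetical and the
# outputs are canned stubs standing in for a trained blackbox model and a
# post-hoc justification generator.

from dataclasses import dataclass


@dataclass
class Prediction:
    label: str          # e.g. "Western Grebe"
    confidence: float   # the primary model's score, not an explanation


class PrimaryClassifier:
    """Stand-in for the opaque core deep learning algorithm."""

    def predict(self, image) -> Prediction:
        # A real system would run a trained network over the image.
        return Prediction(label="Western Grebe", confidence=0.97)


class JustificationGenerator:
    """Stand-in for the secondary algorithm that produces a post-hoc rationale."""

    def justify(self, image, prediction: Prediction) -> str:
        # A real system would ground this in detected visual features and
        # contrast the label with near-misses such as the Laysan Albatross.
        return (f"Classified as {prediction.label} because the bird has a long "
                "white neck, pointy yellow beak and red eyes, unlike a similar "
                "bird such as the Laysan Albatross.")


def classify_with_justification(image):
    prediction = PrimaryClassifier().predict(image)
    rationale = JustificationGenerator().justify(image, prediction)
    return prediction, rationale
```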

Possibly, it might turn out that the medical justification given by the justifying AI sounds like pure nonsense. Rich Caruana et al. present a case in which asthmatics were deemed to be at lower risk of dying from pneumonia. As a result, the algorithm recommended less aggressive treatment for asthmatics who contracted pneumonia. The key mistake the primary algorithm made was that it failed to account for the fact that asthmatics who contracted pneumonia had better outcomes only because they tended to receive more aggressive treatment in the first place. Even though the algorithm was more accurate on average, it was systematically mistaken about one subgroup. When incidents like these occur, one option is to disregard the primary AI’s recommendation. The rationale is that, by intervening in cases where the blackbox gives an implausible recommendation or prediction, we could hope to do better than by relying on the blackbox alone. The aim of having justifying AI is to make it easier to identify when the primary AI is misfiring. After all, we can expect trained physicians to recognise a good medical justification when they see one, and likewise to recognise bad justifications. The thought here is that the secondary algorithm generating a bad justification is good evidence that the primary AI has misfired.
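As a rough illustration of that override policy, here is a hypothetical sketch: the blackbox recommendation is followed only when a reviewing physician finds the post-hoc justification plausible. The function and parameter names are assumptions for illustration only, not drawn from any actual clinical system.

```python
# A hypothetical sketch (not from the post) of the override policy described
# above: follow the blackbox recommendation only when a reviewing physician
# judges the post-hoc justification to be medically plausible.

def recommend_treatment(case, primary_model, justifier, physician_review):
    """Return the primary model's recommendation unless its justification fails review."""
    recommendation = primary_model.predict(case)
    rationale = justifier.justify(case, recommendation)
    if physician_review(rationale):
        return recommendation   # the justification looks sound; follow the AI
    return None                 # implausible justification; defer to human judgment
```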

The worry here is that our existing medical knowledge is notoriously incomplete in places. It is to be expected that there will be cases where the optimal decision vis-à-vis patient welfare does not have a plausible medical justification, at least based on our current medical knowledge. For instance, lithium is used as a mood stabilizer, but why it works is poorly understood. This means that ignoring the blackbox whenever a plausible justification in terms of our current medical knowledge is unavailable will tend to lead to suboptimal decisions. Below are three observations that we might make about this type of justifying AI.


Guest Post: Pandemic Ethics. Social Justice Demands Mass Surveillance: Social Distancing, Contact Tracing and COVID-19


Written by: Bryce Goodman

The spread of COVID-19 presents a number of ethical dilemmas. Should ventilators only be used to treat those who are most likely to recover from infection? How should violators of quarantine be punished? What is the right balance between protecting individual privacy and reducing the virus’ spread?

Most of the mitigation strategies pursued today (including in the US and UK) rely primarily on lock-downs or “social distancing” and not enough on contact tracing — the use of location data to identify who an infected individual may have come into contact with and infected. This balance prioritizes individual privacy above public health. But contact tracing will not only protect our overall welfare. It can also help address the disproportionately negative impact social distancing is having on our least well off.
Contact tracing “can achieve epidemic control if used by enough people,” says a recent paper published in Science. “By targeting recommendations to only those at risk, epidemics could be contained without need for mass quarantines (‘lock-downs’) that are harmful to society.” Once someone has tested positive for a virus, we can use that person’s location history to deduce whom they may have “contacted” and infected. For example, we might find that 20 people were in close proximity and 15 have now tested positive for the virus. Contact tracing would allow us to identify and test the other 5 before they spread the virus further.
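To make the mechanics concrete, here is a minimal sketch of that logic over a toy set of co-location records. The record layout, proximity threshold and names are illustrative assumptions, not taken from the Science paper cited above.

```python
# A minimal sketch of the contact-tracing logic described above: given coarse
# location histories, find everyone who shared a time and place with a person
# who tested positive, then flag those not yet tested.

from math import dist

PROXIMITY_METRES = 2.0

# (person_id, time_bucket, x, y) -- coarse co-location records
location_log = [
    ("alice", 10, 0.0, 0.0),
    ("bob",   10, 1.0, 0.5),    # near alice in the same time bucket
    ("carol", 10, 50.0, 40.0),  # far away
]


def contacts_of(infected_id, log, threshold=PROXIMITY_METRES):
    """People who shared a time bucket with infected_id within threshold metres."""
    infected_positions = [(t, (x, y)) for pid, t, x, y in log if pid == infected_id]
    found = set()
    for pid, t, x, y in log:
        if pid == infected_id:
            continue
        for t_inf, pos in infected_positions:
            if t == t_inf and dist((x, y), pos) <= threshold:
                found.add(pid)
    return found


already_positive = {"alice"}
exposed = contacts_of("alice", location_log)
to_notify = exposed - already_positive
print(to_notify)  # {'bob'} -- test and notify before the virus spreads further
```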
The success of contact tracing will largely depend on the accuracy and ubiquity of a widespread testing program. Evidence thus far suggests that countries with extensive testing and contact tracing are able to avoid or relax social distancing restrictions in favor of more targeted quarantines.


A Proposal for Addressing Language Inequality in Academia

Written by Anri Asagumo

Oxford Uehiro/St Cross Scholar

Although more and more people see the importance of diversity in academia, language diversity is one type of diversity that seems to be diminishing: English is increasingly dominant in both international conferences and journal publications. I would like to argue that people who are born and raised in an English-speaking country should be required to acquire a second language, to the level at which they can write a rudimentary paper and give a presentation in that language, in order to apply for international conferences and submit papers to international journals. The purpose of this requirement would be to address the significant inequality between native English speakers and others. I focus on academia here, but ideally the same requirement should apply to the business world, too.

Cross Post: Re: Nudges in a Post-truth World 

Guest Post: Nathan Hodson

This article originally appeared on the Journal of Medical Ethics Blog 

In a recent article in the Journal of Medical Ethics, Neil Levy has developed a concept of “nudges to reason,” offering a new tool for those trying to reconcile medical ethics with the application of behavioural psychological research – a practice known as nudging. Very roughly, nudging means adjusting the way choices are presented to the public in order to promote certain decisions.

As Levy notes, some people are concerned that nudges present a threat to autonomy. Attempts at reconciling nudges with ethics, then, are important because nudging in healthcare is here to stay, but we need to ensure it is used in ways that respect autonomy (and other moral principles).

Guest Post: Crispr Craze and Crispr Cares


Written by Robert Ranisch, Institute for Ethics and History of Medicine, University of Tuebingen

@RobRanisch

Newly discovered tools for the targeted editing of the genome have been generating talk of a revolution in gene technology for the last five years. The CRISPR/Cas9 method draws most of the attention by enabling simpler, more precise, cheaper and quicker modification of genes than was previously possible. Since these so-called molecular scissors can be set to work in just about all organisms, hardly a week goes by without headlines regarding the latest scientific research: genome editing could keep vegetables looking fresh, eliminate malaria from disease-carrying mosquitoes, replace antibiotics or bring mammoths back to life.

Naturally, the greatest hopes are placed in its potential for various medical applications. Despite the media hype, there are no ready-to-use CRISPR gene therapies. However, the first clinical studies are under way in China and have been approved in the USA. Future therapies might make it possible to eradicate hereditary illnesses, conquer cancer, or even cure HIV/AIDS. Just this May, results from experiments on mice gave reason to hope for this. In a similar vein, germline intervention is now being reconsidered as a realistic option, although it had long been considered taboo because its (side) effects are passed down the generations.

Invited Guest Post: Healthcare professionals need empathy too!


Written by Angeliki Kerasidou & Ruth Horn, The Ethox Centre, Nuffield Department of Population Health, University of Oxford


Recently, a number of media reports and personal testimonies have drawn attention to the intense physical and emotional stress to which doctors and nurses working in the NHS are exposed on a daily basis. Medical professionals are increasingly reporting feelings of exhaustion, depression, and even suicidal thoughts. Long working hours, decreasing numbers of staff, budget cuts and the lack of time to address patients’ needs are mentioned as some of the contributing factors (Campbell, 2015; The Guardian, 2016). Such factors have been linked with loss of empathy towards patients and, in some cases, with gross failures in their care (Francis, 2013).

Cross Post: Women’s-Only Swimming Hours: Accommodation Is Not Discrimination

Written by Miriam Rosenbaum and Sajda Ouachtouki 

This article was originally published in First Things.

Women’s-only hours at swimming pools are nothing new. Many secular institutions have long hosted separate swim hours for women and girls who, for reasons of faith or personal preference, desire to swim without the presence of men. The list includes Barnard College, Harvard University, Yale University, and swim clubs, JCCs, and YMCAs across the country. Recently, women’s-only swimming hours have become a topic of debate, especially in New York, where promoters of liberal secularist ideology (including the editorial page of the New York Times) are campaigning against women’s-only hours at a public swimming pool on Bedford Avenue in Brooklyn. They claim that women’s-only swimming hours, even for a small portion of the day, must be abolished in the interest of “general fairness and equal access” and to avoid “discrimination” in favor of certain religions.

Guest Post: Scientists aren’t always the best people to evaluate the risks of scientific research

Written by Simon Beard, Research Associate at the Centre for the Study of Existential Risk, University of Cambridge

How can we study the pathogens that will be responsible for future global pandemics before they have happened? One way is to find likely candidates currently in the wild and genetically engineer them so that they gain the traits that will be necessary for them to cause a global pandemic.

Such ‘Gain of Function’ research that produces ‘Potential Pandemic Pathogens’ (GOF-PPP for short) is highly controversial. Following some initial trials looking at what kinds of mutations were needed to make avian influenza transmissible in ferrets, a moratorium has been imposed on further research whilst the risks and benefits associated with it are investigated.

A Second Response to Professor Neil Levy’s Leverhulme Lectures.


Written by Richard Ngo, an undergraduate student in Computer Science and Philosophy at the University of Oxford.

Neil Levy’s Leverhulme Lectures start from the admirable position of integrating psychological results and philosophical arguments, with the goal of answering two questions:

(1) are we (those of us with egalitarian explicit beliefs but conflicting implicit attitudes) racist?

(2) when those implicit attitudes cause actions which seem appropriately to be characterised as racist (sexist, homophobic…), are we morally responsible for these actions?