Guest Post: It has become possible to use cutting-edge AI language models to generate convincing high school and undergraduate essays. Here’s why that matters
Written by: Julian Koplin & Joshua Hatherley, Monash University
ChatGPT is a variant of the GPT-3 language model developed by OpenAI. It is designed to generate human-like text in response to prompts given by users. As with any language model, ChatGPT is a tool that can be used for a variety of purposes, including academic research and writing. However, it is important to consider the ethical implications of using such a tool in academic contexts. The use of ChatGPT, or other large language models, to generate undergraduate essays raises a number of ethical considerations. One of the most significant concerns is the issue of academic integrity and plagiarism.
One concern is the potential for ChatGPT or similar language models to be used to produce work that is not entirely the product of the person submitting it. If a student were to use ChatGPT to generate significant portions of an academic paper or other written work, it would be considered plagiarism, as they would not be properly crediting the source of the material. Plagiarism is a serious offence in academia, as it undermines the integrity of the research process and can lead to the dissemination of false or misleading information. This is not only dishonest, but it also undermines the fundamental principles of academic scholarship, which is based on original research and ideas.
Another ethical concern is the potential for ChatGPT or other language models to be used to generate work that is not fully understood by the person submitting it. While ChatGPT and other language models can produce high-quality text, they do not have the same level of understanding or critical thinking skills as a human. As such, using ChatGPT or similar tools to generate work without fully understanding and critically evaluating the content could lead to the dissemination of incomplete or incorrect information.
In addition to the issue of academic integrity, the use of ChatGPT to generate essays also raises concerns about the quality of the work that is being submitted. Because ChatGPT is a machine learning model, it is not capable of original thought or critical analysis. It simply generates text based on the input data that it is given. This means that the essays generated by ChatGPT would likely be shallow and lacking in substance, and they would not accurately reflect the knowledge and understanding of the student who submitted them.
Furthermore, the use of ChatGPT to generate essays could also have broader implications for education and the development of critical thinking skills. If students were able to simply generate essays using AI, they would have little incentive to engage with the material and develop their own understanding and ideas. This could lead to a decrease in the overall quality of education, and it could also hinder the development of important critical thinking and problem-solving skills.
Overall, the use of ChatGPT to generate undergraduate essays raises serious ethical concerns. While these tools can be useful for generating ideas or rough drafts, it is important to properly credit the source of any material generated by the model and to fully understand and critically evaluate the content before incorporating it into one’s own work. It undermines academic integrity, it is likely to result in low-quality work, and it could have negative implications for education and the development of critical thinking skills. Therefore, it is important that students, educators, and institutions take steps to ensure that this practice is not used or tolerated.
Everything that you just read was generated by an AI
Simulate Your True Self
Written by Muriel Leuenberger
A modified version of this post is forthcoming in Think, edited by Stephen Law.
Spoiler warning: if you want to watch the movie Don’t Worry Darling, I advise you to not read this article beforehand (but definitely read it afterwards).
One of the most commonly recurring philosophical thought experiments in movies must be the simulation theory. The Matrix, The Truman Show, and Inception are only three of countless movies following the trope of “What if reality is a simulation?”. The most recent addition is Don’t Worry Darling by Olivia Wilde. In this movie, the main character, Alice, discovers that her idyllic 1950s-style housewife life in the company town of Victory, California, is a simulation. Some of the inhabitants of Victory (mostly the men) are aware of this, such as her husband Jack, who forced her into the simulation. Others (mostly the women) share Alice’s unawareness. Over the course of the movie, Alice’s memories of her real life return, and she manages to escape the simulation. This blog post is part of a series of articles in which Hazem Zohny, Mette Høeg, and I explore ethical issues connected to the simulation theory through the example of Don’t Worry Darling.
One question we may ask is whether living in a simulation, with a simulated and potentially altered body and mind, would entail giving up your true self, or whether you could come closer to it by freeing yourself from the constraints of reality. What does it mean to be true to yourself in a simulated world? Can you be real in a fake world, with a fake body and fake memories? And would there be any value in trying to be authentic in a simulation?
Are We Heading Towards a Post-Responsibility Era? Artificial Intelligence and the Future of Morality
By Maximilian Kiener. First published on the Public Ethics Blog
AI, Today and Tomorrow
Already, 77% of our electronic devices use artificial intelligence (AI). By 2025, the global AI market is estimated to grow to 60 billion US dollars. By 2030, AI may even boost global GDP by 15.7 trillion US dollars. And, at some point thereafter, AI may become the last human invention, provided it optimises itself and takes over research and innovation, leading to what some have termed an ‘intelligence explosion’. In the grand scheme of things, as Google CEO Sundar Pichai has suggested, AI will then have a greater impact on humanity than electricity or fire did.
Some of these latter claims will remain controversial. Yet it is also clear that AI increasingly outperforms humans in areas that no machine had entered before, including driving cars, diagnosing illnesses, selecting job applicants, and more. Moreover, AI promises great advantages, such as making transportation safer, optimising health care, and assisting scientific breakthroughs, to mention only a few.
There is, however, a lingering concern. Even the best AI is not perfect, and when things go wrong, e.g. when an autonomous car hits a pedestrian, when Amazon’s Alexa manipulates a child, or when an algorithm discriminates against certain ethnic groups, we may face a ‘responsibility gap’, a situation in which no one is responsible for the harm caused by AI. Responsibility gaps may arise because current AI systems themselves cannot be morally responsible for what they do, and the humans involved may no longer satisfy key conditions of moral responsibility, such as the following three.
Reflective Equilibrium in a Turbulent Lake: AI Generated Art and The Future of Artists
by Anders Sandberg – Future of Humanity Institute, University of Oxford
Is there a future for humans in art? Over the last few weeks the question has been loudly debated online, as machine learning made a surprise charge into picture-making. One AI-generated image won a prize at a state art fair. But artists complain that AI art is really a rehash of their own work, a form of automated plagiarism that threatens their livelihood.
How do we ethically navigate the turbulent waters of human and machine creativity, business demands, and rapid technological change? Is it even possible?
In Defense of Obfuscation
Written by Mette Leonard Høeg
At the What’s the Point of Moral Philosophy congress held at the University of Oxford this summer, there was near-consensus among the gathered philosophers that clarity in moral philosophy and practical ethics is by definition good and obscurity necessarily bad. Michael J. Zimmerman explicitly praised clarity and accessibility in philosophical writing and criticised the lack of those qualities in continental philosophy in particular, using some of Sartre’s more recalcitrant writing as a cautionary example (while conceding that a similar lack of coherence can occasionally be found in analytical philosophy). This seemed to be broadly and whole-heartedly supported by the rest of the participants.
Track Thyself? Personal Information Technology and the Ethics of Self-knowledge
Written by Muriel Leuenberger
The ancient Greek injunction “Know Thyself”, inscribed at the temple of Delphi, represents just one among many instances in which we are encouraged to pursue self-knowledge. Socrates argued that “examining myself and others is the greatest good”, and according to Kant moral self-cognition is “the First Command of all Duties to Oneself”. Moreover, the pursuit of self-knowledge, and how it helps us to become wiser, better, and happier, is such a common theme in popular culture that you can find numerous lists online of the 10, 15, or 39 best movies and books on self-knowledge.
Peter Railton’s Uehiro Lectures 2022
Written by Maximilian Kiener
Professor Peter Railton, from the University of Michigan, delivered the 2022 Uehiro Lectures in Practical Ethics. In a series of three consecutive presentations entitled ‘Ethics and Artificial Intelligence’, Railton focused on what has become one of the major areas in contemporary philosophy: the challenge of how to understand, interact with, and regulate AI.
Railton’s primary concern is not the ‘superintelligence’ that could vastly outperform humans and, as some have suggested, threaten human existence as a whole. Rather, Railton focuses on what we are already confronted with today, namely partially intelligent systems that increasingly execute a variety of tasks, from powering autonomous cars to assisting medical diagnostics, algorithmic decision-making, and more.
Three Observations about Justifying AI
Written by: Anantharaman Muralidharan, G Owen Schaefer, Julian Savulescu
Cross-posted with the Journal of Medical Ethics blog
Consider the following kind of medical AI. It consists of two parts. The first is a core deep machine learning algorithm. These black-box algorithms may be more accurate than human judgment or interpretable algorithms, but they are notoriously opaque: they cannot tell us on what basis a decision was made. The second part is an algorithm that generates a post-hoc medical justification for the core algorithm’s output. Algorithms of this kind are already available for visual classification. When the primary algorithm identifies a given bird as a Western Grebe, the secondary algorithm provides a justification for this decision: “because the bird has a long white neck, pointy yellow beak and red eyes”. The justification goes beyond a mere description of the provided image or a definition of the bird in question; it links the information provided in the image to the features that distinguish the bird. The justification is also sufficiently fine-grained to account for why the bird in the picture is not a similar bird, such as the Laysan Albatross. It is not hard to imagine that such an algorithm will soon be available for medical decisions, if it is not already. Let us call this type of AI “justifying AI”, to distinguish it from algorithms which try, to some degree or other, to wear their inner workings on their sleeves.
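To make the two-part structure concrete, here is a minimal sketch in Python of how such a justifying AI might be wired together. All class and function names are illustrative assumptions on our part, not a description of any existing system: an opaque primary classifier produces a label, and a secondary model then produces a post-hoc, human-readable justification for it.

```python
# Hypothetical sketch of a "justifying AI": an opaque primary classifier
# paired with a secondary model that generates a post-hoc justification.
# All names here are illustrative, not references to a real system.

from dataclasses import dataclass


@dataclass
class Decision:
    label: str          # what the primary (black-box) model predicts
    justification: str  # post-hoc rationale produced by the secondary model


class JustifyingAI:
    def __init__(self, primary_model, justifier_model):
        # primary_model: opaque classifier (e.g. a deep net) mapping an input to a label
        # justifier_model: maps (input, label) to a human-readable justification
        self.primary = primary_model
        self.justifier = justifier_model

    def decide(self, x) -> Decision:
        label = self.primary(x)               # black-box prediction
        rationale = self.justifier(x, label)  # justification generated after the fact
        return Decision(label, rationale)


# Illustrative usage with stand-in functions (the bird example from the text):
primary = lambda image: "Western Grebe"
justifier = lambda image, label: (
    "long white neck, pointy yellow beak and red eyes; "
    "unlike the Laysan Albatross, which lacks the red eyes"
)
print(JustifyingAI(primary, justifier).decide("bird_photo.jpg"))
```

The point of the sketch is simply that the justification is produced after, and separately from, the prediction it is meant to support; it is not an account of how the black box actually reached its decision.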
Possibly, the medical justification given by the justifying AI will turn out to sound like pure nonsense. Rich Caruana et al. present a case in which a pneumonia-risk model deemed asthmatics to be at lower risk of dying from pneumonia, and as a result recommended less aggressive treatment for asthmatics who contracted pneumonia. The primary algorithm’s key mistake was that it failed to account for the fact that asthmatics who contracted pneumonia had better outcomes only because they tended to receive more aggressive treatment in the first place. Even though the algorithm was more accurate on average, it was systematically mistaken about one subgroup. When incidents like these occur, one option is to disregard the primary AI’s recommendation. The rationale is that, by intervening in cases where the black box gives an implausible recommendation or prediction, we could hope to do better than by relying on the black box alone. The aim of having justifying AI is to make it easier to identify when the primary AI is misfiring. After all, we can expect trained physicians to recognise a good medical justification when they see one, and likewise to recognise bad ones. The thought here is that the secondary algorithm generating a bad justification is good evidence that the primary AI has misfired.
The worry here is that our existing medical knowledge is notoriously incomplete in places. We should expect cases where the optimal decision vis-à-vis patient welfare lacks a plausible medical justification, at least given our current medical knowledge. For instance, lithium is used as a mood stabilizer, but why it works is poorly understood. This means that ignoring the black box whenever a plausible justification in terms of our current medical knowledge is unavailable will tend to lead to less optimal decisions. Below are three observations that we might make about this type of justifying AI.
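As a rough illustration of the decision rule just described, and of why it can misfire when medical knowledge is incomplete, consider the following sketch. The function names and the plausibility test are hypothetical; this is a toy rendering of the policy, not a clinical protocol.

```python
# Hypothetical sketch of the override policy discussed above: accept the
# black-box recommendation unless its post-hoc justification looks
# implausible to a clinician, in which case fall back to human judgment.

def clinician_finds_plausible(justification: str) -> bool:
    # Stand-in for a physician's judgment against current medical knowledge.
    # In lithium-like cases a correct recommendation may still fail this test,
    # because the mechanism behind it is poorly understood.
    return "mechanism unknown" not in justification


def final_decision(blackbox_recommendation: str, justification: str,
                   human_default: str) -> str:
    if clinician_finds_plausible(justification):
        return blackbox_recommendation   # defer to the (more accurate) black box
    return human_default                 # override when the rationale seems like nonsense


# Example: the rule discards a recommendation whose justification cannot be
# stated in terms of current medical knowledge, even if it is in fact optimal.
print(final_decision("continue lithium",
                     "mood stabilisation, mechanism unknown",
                     "switch to alternative treatment"))
```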
Ethics of the GameStop Short Squeeze
By Doug McConnell
Recently a large, loosely coordinated group of individual ‘retail investors’ have been buying up stocks that certain hedge funds had bet against (i.e. ‘shorted’). In doing so, the retail investors have driven up the price of those stocks. This has caused hedge funds that shorted the stock to lose billions of dollars and enabled a number of retail investors to get rich in the process. The phenomenon is anthropologically interesting because it is symbolic of a shift in power away from the traditional Wall Street players towards less wealthy, less well-connected individuals. But what are the ethics of this? Did Average Joe Trader just bring a measure of justice to Wall Street? Or did the mob unethically manipulate the market? If they did, are their actions any more unethical than the usual behaviour of institutional investors?