Simulate Your True Self
Written by Muriel Leuenberger
A modified version of this post is forthcoming in Think edited by Stephen Law.
Spoiler warning: if you want to watch the movie Don’t Worry Darling, I advise you not to read this article beforehand (but definitely read it afterwards).
One of the most common philosophical thought experiments in movies must be the simulation theory. The Matrix, The Truman Show, and Inception are only three of countless movies following the trope of “What if reality is a simulation?”. The most recent addition is Don’t Worry Darling by Olivia Wilde. In this movie, the main character Alice discovers that her idyllic 1950s-style housewife life in the company town of Victory, California, is a simulation. Some of the inhabitants of Victory (mostly men) are aware of this, such as her husband Jack, who forced her into the simulation. Others (mostly women) share Alice’s unawareness. Over the course of the movie, Alice’s memories of her real life return, and she manages to escape the simulation. This blog post is part of a series of articles in which Hazem Zohny, Mette Høeg, and I explore ethical issues connected to the simulation theory through the example of Don’t Worry Darling.
One question we may ask is whether living in a simulation, with a simulated and potentially altered body and mind, would entail giving up your true self, or whether you could come closer to it by freeing yourself from the constraints of reality. What does it mean to be true to yourself in a simulated world? Can you be real in a fake world with a fake body and fake memories? And would there be any value in trying to be authentic in a simulation?
Guest Post: Dear Robots, We Are Sorry
Written by Stephen Milford, PhD
Institute for Biomedical Ethics, Basel University
The rise of AI presents humanity with an interesting prospect: a companion species. Ever since our last hominid cousins went extinct on the island of Flores almost 12,000 years ago, Homo sapiens has been alone in the world.[i] AI, true AI, offers us the unique opportunity to regain what was lost to us. Ultimately, this is what has captured our imagination and drives our research forward. Make no mistake, our intentions with AI are clear: artificial general intelligence (AGI), a being that is like us, a personal being (whatever ‘person’ may mean).
If any of us are in any doubt about this, consider Turing’s famous test. The aim is not to see how intelligent the AI can be, how many calculations it performs, or how it sifts through data. An AI will pass the test if it is judged by a person to be indistinguishable from another person. Whether this is artificial or real is academic; the result is the same: human persons will experience the reality of another person for the first time in 12,000 years, and we are closer now than ever before.
We Need To Have A Conversation About “We Need To Have A Conversation”
By Ben Davies
When new technologies emerge, ethical questions inevitably arise about their use. Scientists with relevant expertise will be invited to speak on radio, on television, and in newspapers (sometimes ethicists are asked, too, but this is rarer). In many such cases, a particular phrase gets used when the interview turns to potential ethical issues:
“We need to have a conversation”.
It would make for an interesting qualitative research paper to analyse media interviews with scientists and see how often this phrase comes up (perhaps it seems more prevalent to me than it really is because I’ve become particularly attuned to it). Since I haven’t done that research, my suggestion that this is a common response should be taken with a pinch of salt. But it is undeniably a phrase that gets trotted out, and I want to suggest that there are at least two issues with it. Neither issue is necessarily tied to the use of this phrase (it’s entirely possible to use it without raising either), but both arise frequently.
In keeping with the stereotype of an Anglophone philosopher, I’m going to pick up on a couple of key terms in the phrase and ask what they mean. First, though, I’ll offer a brief, qualified defence of it. My aim in raising these issues isn’t to attack scientists who use the phrase, but rather to ask that a bit more thought be put into what is, at heart, a reasonable response to ethical complexity.
Track Thyself? Personal Information Technology and the Ethics of Self-knowledge
Written by Muriel Leuenberger
The ancient Greek injunction “Know Thyself”, inscribed at the temple of Delphi, represents just one among many instances where we are encouraged to pursue self-knowledge. Socrates argued that “examining myself and others is the greatest good”, and according to Kant, moral self-cognition is “the First Command of all Duties to Oneself”. Moreover, the pursuit of self-knowledge and how it helps us to become wiser, better, and happier is such a common theme in popular culture that you can find numerous lists online of the 10, 15, or 39 best movies and books on self-knowledge.
Judgebot.exe Has Encountered a Problem and Can No Longer Serve
Written by Stephen Rainey
Artificial intelligence (AI) is anticipated by many as having the potential to revolutionise traditional fields of knowledge and expertise. In some quarters, this has led to fears about the future of work, with machines muscling in on otherwise human work. Elon Musk is rattling cages again in this context with his imaginary ‘Teslabot’. Reports on the future of work have included these replacement fears for administrative jobs, service and care roles, manufacturing, medical imaging, and the law.
In the context of legal decision-making, a job well done includes reference to prior cases as well as statute. This is, in part, to ensure continuity and consistency in legal decision-making. The more that relevant cases can be drawn upon in any instance of legal decision-making, the better the possibility of good decision-making. But given the volume of legal documentation and the passage of time, there may be too much for legal practitioners to fully comprehend.
Oxford Uehiro Prize in Practical Ethics: What, if Anything, is Wrong About Algorithmic Administration?
This essay received an honourable mention in the undergraduate category.
Written by University of Oxford student, Angelo Ryu.
Introduction
The scope of modern administration is vast. We expect the state to perform an ever-increasing number of tasks, including the provision of services and the regulation of economic activity. This requires the state to make a large number of decisions in a wide array of areas. Inevitably, the scale and complexity of such decisions stretch the capacity of good governance.
In response, policymakers have begun to implement systems capable of automated decision making. For example, certain jurisdictions within the United States use an automated system to advise on criminal sentences. Australia uses an automated system for parts of its welfare program.
Such systems, it is said, will help address the costs of modern administration. It is plausibly argued that automation will lead to quicker, more efficient, and more consistent decisions – that it will ward off a return to the days of Dickens’ Bleak House.
Oxford Uehiro Prize in Practical Ethics: Why Is Virtual Wrongdoing Morally Disquieting, Insofar As It Is?
This essay was the winning entry in the undergraduate category of the 6th Annual Oxford Uehiro Prize in Practical Ethics.
Written by University of Oxford student, Eric Sheng.
In the computer game Red Dead Redemption 2 (henceforward, RDR2), players control a character in a virtual world. Among the characters represented by computer graphics but not controlled by a real-world player are suffragettes. Controversy arose when it became known that some players used their characters to torture or kill suffragettes. (One player’s character, for example, feeds a suffragette to an alligator.) In this essay, I seek to explain the moral disquiet – the intuition that things are awry from the moral perspective – that the players’ actions (call them, for short, ‘assaulting suffragettes’) provoke. The explanation will be an exercise in ‘moral psychology, philosophical not psychological’:[1] I seek not to causally explain our disquiet through the science of human nature, but to explain why things are indeed awry, and thus justify our disquiet.
My intention in posing the question in this way is to leave open the possibilities that our disquiet is justified although the players’ actions are not wrong, or that it’s justified but not principally by the wrongness of the players’ actions. These possibilities are neglected by previous discussions of virtual wrongdoing that ask: is this or that kind of virtual wrongdoing wrong? Indeed, I argue that some common arguments for the wrongness of virtual wrongdoing do not succeed in explaining our disquiet, and sketch a more plausible account of why virtual wrongdoing is morally disquieting insofar as it is, which invokes not the wrongness of the players’ actions but what these actions reveal about the players. By ‘virtual wrongdoing’ I mean an action by a player in the real world that intentionally brings about an action φV by a character in a virtual world V such that φV is wrong-in-V; and the criteria for evaluating an action’s wrongness-in-V are the same as those for evaluating an action’s wrongness in the real world.[2]
Cross Post: Privacy is a Collective Concern: When We Tell Companies About Ourselves, We Give Away Details About Others, Too.
By Carissa Véliz
This article was originally published in New Statesman America

People often explain in personal terms why they do or don’t protect the privacy of their data. Those who don’t care much about privacy might say that they have nothing to hide. Those who do worry about it might say that keeping their personal data safe protects them from being harmed by hackers or unscrupulous companies. Both positions assume that caring about and protecting one’s privacy is a personal matter. This is a common misunderstanding.
It’s easy to assume that because some data is “personal”, protecting it is a private matter. But privacy is both a personal and a collective affair, because data is rarely used on an individual basis.
Making Ourselves Better
Written by Stephen Rainey
Human beings are sometimes seen as uniquely capable of enacting life plans and controlling their environment. Take technology, for instance; with it we make the world around us yield to our desires in various ways. Communication technologies and global transport, for example, have the effect of practically shrinking a vast world, making hitherto impossible coordination possible among a global population. This contributes to a view of human-as-maker, or ‘homo faber’. But taking such a view risks minimising human interests that ought not to be ignored.
Homo faber is a future-oriented, adaptable, rational animal, whose efforts are aligned with her interests when she creates technology that enables a stable counteraction of natural circumstance. Whereas animals are typically seen to have well-adapted responses to their environment, honed through generations of adaptation, human beings appear instead to have a general and adaptable skill that can emancipate them from material, external circumstances. We are bad at running away from danger, for instance, but good at building barriers that obviate the need to run. The protections this general, adaptable skill offers are inherently future-facing: humans seem to seek not to react to the environment, but to control it.