Simulate Your True Self
Written by Muriel Leuenberger
A modified version of this post is forthcoming in Think edited by Stephen Law.
Spoiler warning: if you want to watch the movie Don’t Worry Darling, I advise you not to read this article beforehand (but definitely read it afterwards).
One of the most common philosophical thought experiments in movies must be the simulation theory. The Matrix, The Truman Show, and Inception are only three of countless movies following the trope of “What if reality is a simulation?”. The most recent addition is Don’t Worry Darling by Olivia Wilde. In this movie, the main character Alice discovers that her idyllic 1950s-style housewife life in the company town of Victory, California, is a simulation. Some of the inhabitants of Victory (mostly men) are aware of this, such as her husband Jack, who forced her into the simulation. Others (mostly women) share Alice’s unawareness. In the course of the movie, Alice’s memories of her real life return, and she manages to escape the simulation. This blog post is part of a series of articles in which Hazem Zohny, Mette Høeg, and I explore ethical issues connected to the simulation theory through the example of Don’t Worry Darling.
One question we may ask is whether living in a simulation, with a simulated and potentially altered body and mind, would entail giving up your true self or if you could come closer to it by freeing yourself from the constraints of reality. What does it mean to be true to yourself in a simulated world? Can you be real in a fake world with a fake body and fake memories? And would there be any value in trying to be authentic in a simulation?
There Is No Such Thing As A Purely Logical Argument
Written By Mette Leonard Høeg
This blog post is a prepublication draft of an article forthcoming in THINK.
It is well known that rational insight and understanding of scientific facts do not necessarily lead to psychological change and shifts in intuitions. In his paper “Grief and the inconsolation of philosophy” (unpublished manuscript), Dominic Wilkinson sheds light on this gap between insight and emotions as he considers the potential of philosophy for offering consolation in relation to human mortality. More specifically, he looks at the potential of Derek Parfit’s influential reductionist definition of personal identity to provide psychological consolation in the face of the death of oneself and of others. In Reasons and Persons, Parfit argues that personal identity is reducible to physical and psychological continuity of mental states, and that there is no additional fact, diachronic entity or essence that determines identity; and he points to the potential for existential liberation and consolation in adopting this anti-essentialist perspective: “Is the truth depressing? Some might find it so. But I find it liberating, and consoling. When I believed that my existence was such a further fact, I seemed imprisoned in myself. My life seemed like a glass tunnel, through which I was moving faster every year, and at the end of which there was darkness. When I changed my view, the walls of my glass tunnel disappeared. I now live in the open air.”
Nudges and Incomplete Preferences
Written by Sarah Raskoff
(Post is based on my recently published paper in Bioethics)
Nudges are small changes in the presentation of options that make a predictable impact on people’s decisions. Proponents of nudges often claim that they are justified as paternalistic interventions that respect autonomy: they lead people to make better choices, while still allowing them to choose for themselves. A classic example is changing the location of food items in a cafeteria so that healthier choices are more salient. The salience of healthy foods predictably leads people to select them, even though they are still free to select the unhealthy options, too.
Nudges have become increasingly popular, but there are many objections to their widespread use. Some allege that nudges do not actually benefit people, while others suspect that they do not really respect autonomy. Although there are many ways of making sense of this latter concern, in a recent paper, I develop a new version of this objection, which takes as its starting point the observation that people often have incomplete preferences.
What is the Most Important Question in Ethics?
by Roger Crisp
It’s often been said (including by Socrates) that the most important, ultimate, or fundamental question in ethics is: ‘How should one live?’.
Are We Heading Towards a Post-Responsibility Era? Artificial Intelligence and the Future of Morality
By Maximilian Kiener. First published on the Public Ethics Blog
AI, Today and Tomorrow
77% of our electronic devices already use artificial intelligence (AI). By 2025, the global market of AI is estimated to grow to 60 billion US dollars. By 2030, AI may even boost global GDP by 15.7 trillion US dollars. And, at some point thereafter, AI may come to be the last human invention, provided it optimises itself and takes over research and innovation, leading to what some have termed an ‘intelligence explosion’. In the grand scheme of things, as Google CEO Sundar Pichai suggests, AI may then have a greater impact on humanity than electricity and fire did.
Some of these latter statements will remain controversial. Yet, it is also clear that AI increasingly outperforms humans in many areas that no machine has ever entered before, including driving cars, diagnosing illnesses, selecting job applicants, and more. Moreover, AI also promises great advantages, such as making transportation safer, optimising health care, and assisting scientific breakthroughs, to mention only a few.
There is, however, a lingering concern. Even the best AI is not perfect, and when things go wrong, e.g. when an autonomous car hits a pedestrian, when Amazon’s Alexa manipulates a child, or when an algorithm discriminates against certain ethnic groups, we may face a ‘responsibility gap’, a situation in which no one is responsible for the harm caused by AI. Responsibility gaps may arise because current AI systems themselves cannot be morally responsible for what they do, and the humans involved may no longer satisfy key conditions of moral responsibility, such as the following three.
Fracking and the Precautionary Principle
Image: Leolynn11, CC BY-SA 4.0 <https://creativecommons.org/licenses/by-sa/4.0>, via Wikimedia Commons
The UK Government has lifted the prohibition on fracking.
The risks associated with fracking have been much discussed. There is widespread agreement that earthquakes cannot be excluded.
The precautionary principle springs immediately to mind. There are many iterations of this principle. The gist of the principle, and the gist of the objections to it, are helpfully summarised as follows:
In the regulation of environmental, health and safety risks, “precautionary principles” state, in their most stringent form, that new technologies and policies should be rejected unless and until they can be shown to be safe. Such principles come in many shapes and sizes, and with varying degrees of strength, but the common theme is to place the burden of uncertainty on proponents of potentially unsafe technologies and policies. Critics of precautionary principles urge that the status quo itself carries risks, either on the very same margins that concern the advocates of such principles or else on different margins; more generally, the costs of such principles may outweigh the benefits.
Whichever version of the principle one adopts, it seems that the UK Government’s decision falls foul of it. Even if one accepts (controversially) that the increased flow of gas from fracking will not in itself cause harm (by way of climate disruption), it seems impossible to say that any identifiable benefit from the additional gas (which could only be by way of reduced fuel prices) clearly outweighs the potential non-excludable risk from earthquakes (even if that risk is very small).
If that’s right, can the law do anything about it?
Reflective Equilibrium in a Turbulent Lake: AI Generated Art and The Future of Artists
by Anders Sandberg – Future of Humanity Institute, University of Oxford
Is there a future for humans in art? Over the last few weeks the question has been loudly debated online, as machine learning did a surprise charge into making pictures. One image won a state art fair. But artists complain that the AI art is actually a rehash of their art, a form of automated plagiarism that threatens their livelihood.
How do we ethically navigate the turbulent waters of human and machine creativity, business demands, and rapid technological change? Is it even possible?
In Defense of Obfuscation
Written by Mette Leonard Høeg
At the What’s the Point of Moral Philosophy congress held at the University of Oxford this summer, there was near-consensus among the gathered philosophers that clarity in moral philosophy and practical ethics is by definition good and obscurity necessarily bad. Michael J. Zimmerman explicitly praised clarity and accessibility in philosophical writings and criticised the lack of those qualities especially in continental philosophy, using some of Sartre’s more recalcitrant writing as a cautionary example (although also conceding that a similar lack of coherence can occasionally be found in analytical philosophy). This seemed to be broadly and wholeheartedly supported by the rest of the participants.
We Need To Have A Conversation About “We Need To Have A Conversation”
By Ben Davies
When new technologies emerge, ethical questions inevitably arise about their use. Scientists with relevant expertise will be invited to speak on radio, on television, and in newspapers (sometimes ethicists are asked, too, but this is rarer). In many such cases, a particular phrase gets used when the interview turns to potential ethical issues:
“We need to have a conversation”.
It would make for an interesting qualitative research paper to analyse media interviews with scientists to see how often this phrase comes up (perhaps it seems more prevalent to me than it really is because I’ve become particularly attuned to it). Since I haven’t done that research, my suggestion that this is a common response should be taken with a pinch of salt. But it’s undeniably a phrase that gets trotted out. And I want to suggest that there are at least two issues with it. Neither issue is necessarily tied to the use of this phrase (it’s entirely possible to use it without raising either), but they arise frequently.
In keeping with the stereotype of an Anglophone philosopher, I’m going to pick up on a couple of key terms in a phrase and ask what they mean. First, though, I’ll offer a brief, qualified defence of this phrase. My aim in raising these issues isn’t to attack scientists who use it, but rather to ask that a bit more thought is put into what is, at heart, a reasonable response to ethical complexity.
Awareness of a Nudge is not Required for Resistance of a Nudge
Written by Gabriel De Marco and Thomas Douglas
This blog post is based on our forthcoming paper: “Nudge Transparency is not Required for Nudge Resistibility,” Ergo.
Consider the following cases:
Food Placement. In order to encourage healthy eating, cafeteria staff place healthy food options at eye-level, whereas unhealthy options are placed lower down. Diners are more likely to pick healthy foods and less likely to pick unhealthy foods than they would have been had foods instead been distributed randomly.
Default Registration. In application forms for a driver’s license, applicants are asked whether they wish to be included in the organ donation registry. In order to opt out, one needs to tick a box; otherwise, the applicant will be registered as an organ donor. The form was designed in this way in order to recruit more organ donors; applicants are more likely to be registered than they would have been had the default been not being included in the registry.
Interventions like these two are often called nudges. Though many agree that it is, at least sometimes, ethically OK to nudge people, there is a thriving debate about when, exactly, it is OK.
Some authors have suggested that nudging is ethically acceptable only when (or because) the nudge is easy to resist. But what does it take for a nudge to be easy to resist? Authors rarely give accounts of this, yet they often seem to assume what we call the Awareness Condition (AC):
AC: A nudge is easy to resist only if the agent can easily become aware of it.
We think AC is false. In our forthcoming paper, we mount a more developed argument for this, but in this blog post, we simply consider one counterexample to AC and one response to that counterexample.