
Guest Post: Oppenheimer – Not The Morality Of The Bomb

Written by Martin Sand & Karin Jongsma

The recently released Christopher Nolan movie “Oppenheimer” proves to be a phenomenal film that deserves to be watched on the big screen. Despite its three-hour length, “Oppenheimer” is an intriguing portrayal of a genius, albeit somewhat narcissistic, character who – in the second half of the movie – seemingly regrets his involvement in the development and deployment of the atomic bomb. “Oppenheimer” is much more than a biography of a memorable scientist; it is a tale of the complex relationship between science and politics, and of the complexity of moral decision-making in an uncertain world faced with unprecedented suffering and cruelty. It provides insights into how the political climate in the “era of ideologies” (Karl Dietrich Bracher) could make it difficult for scientists to hold left-leaning views while pursuing successful scientific careers in the US. Those times and experiences are worth recollecting, also for ongoing discussions about censorship and academic freedom.

Stay Clear of the Door

An AI door, according to a generative AI

Written by David Lyreskog 

 

In what is quite possibly my last entry for the Practical Ethics blog, as I’m sadly leaving the Uehiro Centre in July, I would like to reflect on some things that have been stirring my mind the last year or so.

In particular, I have been thinking about thinking with machines, with people, and what the difference is.

The Uehiro Centre for Practical Ethics is located in an old carpet warehouse on an ordinary side street in Oxford. Facing the building, there is a gym to your left, and a pub to your right, mocking the researchers residing within the centre walls with a daily dilemma. 

As you are granted access to the building – be it via buzzer or key card – a dry, somewhat sad, voice states “stay clear of the door” before the door slowly swings open.


How Brain-to-Brain Interfaces Will Make Things Difficult for Us

Written by David Lyreskog

Four images depicting ‘Hivemind Brain-Computer Interfaces’, as imagined by the AI art generator Midjourney.

 

A growing number of technologies are currently being developed to improve and distribute thinking and decision-making. Rapid progress in brain-to-brain interfacing, and in hybrid and artificial intelligence, promises to transform how we think about collective and collaborative cognitive tasks. With implementations ranging from research to entertainment, and from therapeutics to military applications, these tools continue to improve, and we need to anticipate and monitor their impacts – how they may affect our society, but also how they may reshape our fundamental understanding of agency, responsibility, and other concepts which ground our moral landscapes.

Cross Post: When Can You Refuse to Rescue?


Written by Theron Pummer

This article originally appeared in the OUPBlog

You can save a stranger’s life. Right now, you can open a new tab in your internet browser and donate to a charity that reliably saves the lives of people living in extreme poverty. Don’t have the money? Don’t worry—you can give your time instead. You can volunteer, organize a fundraiser, or earn money to donate. Be it with money or time, there are actions you can take now that will save lives. And it’s not just now. You can expect to face such opportunities to help strangers pretty much constantly over the remainder of your life.

I doubt you are morally required to help distant strangers at every opportunity, taking breaks only for food and sleep. Helping that much would be enormously costly. It would involve a lifetime of sacrificing your well-being, freedom, relationships, and personal projects. But even if you are not required to go that far, surely there are some significant costs you are required to incur over the course of your life to prevent serious harms to strangers.

Are We Heading Towards a Post-Responsibility Era? Artificial Intelligence and the Future of Morality

By Maximilian Kiener. First published on the Public Ethics Blog

AI, Today and Tomorrow

77% of our electronic devices already use artificial intelligence (AI). By 2025, the global market of AI is estimated to grow to 60 billion US dollars. By 2030, AI may even boost global GDP by 15.7 trillion US dollars. And, at some point thereafter, AI may come to be the last human invention, provided it optimises itself and takes over research and innovation, leading to what some have termed an ‘intelligence explosion’. In the grand scheme of things, as Google CEO Sundar Pichai has suggested, AI will then have a greater impact on humanity than electricity and fire did.

Some of these latter claims remain controversial. Yet it is also clear that AI increasingly outperforms humans in many areas that no machine had ever entered before, including driving cars, diagnosing illnesses, selecting job applicants, and more. Moreover, AI promises great advantages, such as making transportation safer, optimising health care, and assisting scientific breakthroughs, to mention only a few.

There is, however, a lingering concern. Even the best AI is not perfect, and when things go wrong, e.g. when an autonomous car hits a pedestrian, when Amazon’s Alexa manipulates a child, or when an algorithm discriminates against certain ethnic groups, we may face a ‘responsibility gap’, a situation in which no one is responsible for the harm caused by AI.  Responsibility gaps may arise because current AI systems themselves cannot be morally responsible for what they do, and the humans involved may no longer satisfy key conditions of moral responsibility, such as the following three.


Oxford Uehiro Prize in Practical Ethics: When Money Can’t Buy Happiness: Does Our Duty to Assist the Needy Require Us to Befriend the Lonely?


This article received an honourable mention in the undergraduate category of the 2022 National Oxford Uehiro Prize in Practical Ethics

Written by Lukas Joosten, University of Oxford

While most people accept some duty to assist the needy, few accept a similar duty to befriend the lonely. In this essay I will argue that this position is inconsistent, since most conceptions of a duty to assist entail a duty to befriend the lonely[1]. My main argument follows from two core insights about friendship: friendship cannot be bought like other crucial goods, and friendship is sufficiently important to happiness that we are morally required to address friendlessness in others. The duty to friend, henceforth D2F, refers to a duty to befriend chronically lonely individuals. I present this argument by first setting out a broad conception of the duty to assist, then explaining how this broad conception entails a duty to friend, and finally testing my argument against various objections.

The ABC of Responsible AI

Written by Maximilian Kiener

 

Amazon’s Alexa recently told a ten-year-old girl to touch a live plug with a penny, encouraging the girl to do what could potentially lead to severe burns or even the loss of an entire limb.[1] Fortunately, the girl’s mother heard Alexa’s suggestion, intervened, and made sure her daughter stayed safe.

But what if the girl had been hurt? Who would have been responsible: Amazon for creating Alexa, the parents for not watching their daughter, or the licensing authorities for allowing Alexa to enter the market?


Oxford Uehiro Prize in Practical Ethics: Against Making a Difference


This essay was the winning entry in the undergraduate category of the 7th Annual Oxford Uehiro Prize in Practical Ethics.

Written by University of Oxford student Imogen Rivers 

I. The Complacency Argument

Some of the most serious wrongs are produced collectively. Can individuals bear moral responsibility for such outcomes? Suggestively, it’s been argued that “all who participate by their actions in processes that produce injustice [e.g. “sweatshop” labour] share responsibility for its remedy”;[1] “citizens… bear partial responsibility for the election outcome. Even if an individual’s vote is not decisive for a given candidate’s victory”;[2] “those who contribute to climate change… (by using… excessive… fossil fuels or by deforestation) should make amends”.[3]

However, there’s a prevalent defence: it makes no (significant) difference if I do it. For example, “global warming will still occur even if I do not drive [my “gas-guzzler”] just for fun”;[4] “my polluting doesn’t actually harm anyone, since it doesn’t make a difference to anyone’s health”;[5] “why [should citizens] vote even if… each particular vote does not make a difference to the outcome”?;[6] “British officials… dismiss suggestions that our role on the ground in Saudi Arabia makes any difference [to targeting Yemeni civilians]”.[7]

Is Addiction an Expression of One’s Deep Self?

By Doug McConnell

Chandra Sripada (2016) has recently proposed a conative self-expression account of moral responsibility, which claims that we are responsible for actions motivated by what we care about and not responsible for actions motivated solely by other desires. He claims that this account gives us the intuitively correct answers when used to assess the responsibility of Harry Frankfurt’s Willing Addict and Unwilling Addict. This might be true; however, I argue that it yields a counterintuitive assessment of real-world cases of addiction, because it holds people struggling to recover morally responsible for their relapses.

Moral Responsibility and Interventions

Written by Gabriel De Marco

Consider a story about Joe, Louie, and Dr. White. Joe is a gambling man and has been for much of his life. In his late twenties, Joe began to gamble occasionally, and after a while he decided that he would embrace this practice. Although Joe gambles fairly often, he has his limits and can often resist the desire to gamble.

Louie, on the other hand, is a frugal family man. With his wife, he has been saving money over the last year so that they can take their kids to Disneyland. Dr. White, an evil neurosurgeon who detests the thought of children enjoying themselves at Disneyland, wants to stop this trip. So, Dr. White designs and executes a plan. One night, while Louie is sleeping, Dr. White uses his fancy neuroscientific methods to make Louie more like Joe. He implants in Louie a strong desire to gamble, as well as further attitudes that will help Louie embrace this desire, such that Louie, for example, now values the thrill of gambling and desires that his gambling desires are the ones that lead him to action. To increase the chances of success, Dr. White also significantly weakens some of Louie’s competing attitudes, such as some of his family values and his attitudes towards frugality. When Louie wakes up the next morning, he feels this strong desire to gamble, and although he finds it strange that it has come out of the blue, he fully embraces it (as much as Joe embraces his own gambling desires), having recognized that it lines up with some of his other attitudes about his desires (which were also implanted). Later in the day, while he is “out running errands,” Louie swings by a casino, bets the money he has been saving for the trip, and loses it. “Great success,” thinks Dr. White. Since his goal of preventing some children’s joy at Disneyland has been achieved, he turns Louie back into his old self after Louie goes to sleep.

This story is similar to stories sometimes found in the debate about freedom and moral responsibility, though I will focus on moral responsibility. Intuitively, Louie is not morally responsible for gambling away these savings; or, at the very least, he is significantly less responsible for doing so than someone like Joe would be for doing something similar. If we want to make sense of these different judgments about Louie’s and Joe’s responsibility, we need to find some difference between them that can explain why Louie is, at least, less responsible than regular Joe.
