How Brain-to-Brain Interfaces Will Make Things Difficult for Us
Written by David Lyreskog
A growing number of technologies are currently being developed to improve and distribute thinking and decision-making. Rapid progress in brain-to-brain interfacing, and hybrid and artificial intelligence, promises to transform how we think about collective and collaborative cognitive tasks. With implementations ranging from research to entertainment, and from therapeutics to military applications, as these tools continue to improve, we need to anticipate and monitor their impacts – how they may affect our society, but also how they may reshape our fundamental understanding of agency, responsibility, and other concepts which ground our moral landscapes.
Cross Post: When Can You Refuse to Rescue?
Written by Theron Pummer
This article originally appeared in the OUPBlog
You can save a stranger’s life. Right now, you can open a new tab in your internet browser and donate to a charity that reliably saves the lives of people living in extreme poverty. Don’t have the money? Don’t worry—you can give your time instead. You can volunteer, organize a fundraiser, or earn money to donate. Be it using money or time, there are actions you can take now that will save lives. And it’s not just now. You can expect to face such opportunities to help strangers pretty much constantly over the remainder of your life.
I doubt you are morally required to help distant strangers at every opportunity, taking breaks only for food and sleep. Helping that much would be enormously costly. It would involve a lifetime of sacrificing your well-being, freedom, relationships, and personal projects. But even if you are not required to go that far, surely there are some significant costs you are required to incur over the course of your life, to prevent serious harms to strangers.
Are We Heading Towards a Post-Responsibility Era? Artificial Intelligence and the Future of Morality
By Maximilian Kiener. First published on the Public Ethics Blog
AI, Today and Tomorrow
77% of our electronic devices already use artificial intelligence (AI). By 2025, the global AI market is estimated to grow to 60 billion US dollars. By 2030, AI may boost global GDP by 15.7 trillion US dollars. And, at some point thereafter, AI may become the last human invention, provided it optimises itself and takes over research and innovation, leading to what some have termed an ‘intelligence explosion’. If Google CEO Sundar Pichai is right, AI will then have a greater impact on humanity than electricity and fire did.
Some of these predictions remain controversial. Yet it is clear that AI increasingly outperforms humans in areas no machine had entered before, including driving cars, diagnosing illnesses, and selecting job applicants. Moreover, AI promises great advantages, such as making transportation safer, optimising health care, and assisting scientific breakthroughs, to mention only a few.
There is, however, a lingering concern. Even the best AI is not perfect, and when things go wrong, e.g. when an autonomous car hits a pedestrian, when Amazon’s Alexa manipulates a child, or when an algorithm discriminates against certain ethnic groups, we may face a ‘responsibility gap’: a situation in which no one is responsible for the harm caused by AI. Responsibility gaps may arise because current AI systems cannot themselves be morally responsible for what they do, while the humans involved may no longer satisfy key conditions of moral responsibility.
Oxford Uehiro Prize in Practical Ethics: When Money Can’t Buy Happiness: Does Our Duty to Assist the Needy Require Us to Befriend the Lonely?
This article received an honourable mention in the undergraduate category of the 2022 National Oxford Uehiro Prize in Practical Ethics
Written by Lukas Joosten, University of Oxford
While most people accept some duty to assist the needy, few accept a similar duty to befriend the lonely. In this essay I will argue that this position is inconsistent, since most conceptions of a duty to assist entail a duty to befriend the lonely[1]. My main argument follows from two core insights about friendship: friendship cannot be bought like other crucial goods, and friendship is sufficiently important to happiness that we are morally required to address friendlessness in others. The duty to friend, henceforth D2F, refers to a duty to befriend chronically lonely individuals. I present this argument by first setting out a broad conception of the duty to assist, then explaining how this broad conception entails a duty to friend, and finally testing my argument against various objections.
The ABC of Responsible AI
Written by Maximilian Kiener
Amazon’s Alexa recently told a ten-year-old girl to touch a live plug with a penny, encouraging the girl to do what could potentially lead to severe burns or even the loss of an entire limb.[1] Fortunately, the girl’s mother heard Alexa’s suggestion, intervened, and made sure her daughter stayed safe.
But what if the girl had been hurt? Who would have been responsible: Amazon for creating Alexa, the parents for not watching their daughter, or the licensing authorities for allowing Alexa to enter the market?
Oxford Uehiro Prize in Practical Ethics: Against Making a Difference
This essay was the winning entry in the undergraduate category of the 7th Annual Oxford Uehiro Prize in Practical Ethics.
Written by University of Oxford student Imogen Rivers
I. The Complacency Argument
Some of the most serious wrongs are produced collectively. Can individuals bear moral responsibility for such outcomes? Suggestively, it’s been argued that “all who participate by their actions in processes that produce injustice [e.g. “sweatshop” labour] share responsibility for its remedy”;[1] “citizens… bear partial responsibility for the election outcome. Even if an individual’s vote is not decisive for a given candidate’s victory”;[2] “those who contribute to climate change… (by using… excessive… fossil fuels or by deforestation) should make amends”.[3]
However, there’s a prevalent defence: it makes no (significant) difference if I do it. For example, “global warming will still occur even if I do not drive [my “gas-guzzler”] just for fun”;[4] “my polluting doesn’t actually harm anyone, since it doesn’t make a difference to anyone’s health”;[5] “why [should citizens] vote even if… each particular vote does not make a difference to the outcome?”;[6] “British officials… dismiss suggestions that our role on the ground in Saudi Arabia makes any difference [to targeting Yemeni civilians]”.[7]
Is Addiction an Expression of One’s Deep Self?
By Doug McConnell
Chandra Sripada (2016) has recently proposed a conative self-expression account of moral responsibility, which claims that we are responsible for actions motivated by what we care about, and not responsible for actions motivated solely by other desires. He claims that this account gives the intuitively correct verdicts when used to assess the responsibility of Harry Frankfurt’s Willing Addict and Unwilling Addict. This may be true; however, I argue that it delivers a counterintuitive assessment of real-world cases of addiction, because it holds people struggling to recover morally responsible for their relapses.
Moral Responsibility and Interventions
Written by Gabriel De Marco
Consider a story about Joe, Louie, and Dr. White. Joe is a gambling man and has been for much of his life. In his late twenties, Joe began to gamble occasionally and, after a while, decided to embrace the practice. Although Joe gambles fairly often, he has his limits and can often resist the desire to gamble.
Louie, on the other hand, is a frugal family man. With his wife, he has been saving money over the last year so that they can take their kids to Disneyland. Dr. White, an evil neurosurgeon who detests the thought of children enjoying themselves at Disneyland, wants to stop this trip. So, Dr. White designs and executes a plan. One night, while Louie is sleeping, Dr. White uses his fancy neuroscientific methods to make Louie more like Joe. He implants in Louie a strong desire to gamble, as well as further attitudes that will help Louie embrace this desire: Louie, for example, now values the thrill of gambling, and he desires that his gambling desires are the ones that lead him to action. To increase the chances of success, Dr. White also significantly weakens some of Louie’s competing attitudes, such as some of his family values and his attitudes towards frugality. When Louie wakes up the next morning, he feels this strong desire to gamble, and although he finds it strange that it has come out of the blue, he fully embraces it (as much as Joe embraces his own gambling desires), having recognized that it lines up with some of his other attitudes about his desires (which were also implanted). Later in the day, while he is “out running errands,” Louie swings by a casino, bets the money he has been saving for the trip, and loses it. “Great success,” thinks Dr. White. Since his goal of preventing some children’s joy at Disneyland has been achieved, he turns Louie back into his old self after Louie goes to sleep.
This story is similar to ones sometimes found in the debate about freedom and moral responsibility, though I will focus on moral responsibility. Intuitively, Louie is not morally responsible for gambling away these savings; or, at the very least, he is significantly less responsible for doing so than someone like Joe would be for doing something similar. If we want to make sense of these differing judgments about Louie’s and Joe’s responsibility, we need to find some difference between them that can explain why Louie is, at least, less responsible than regular Joe.