Brain Cells, Slime Mould, and Sentience Semantics

Recent media reports have highlighted a study suggesting that so-called ‘lab-grown brain cells’ can ‘play the video game Pong’. Whilst the researchers have described the system as ‘sentient’, others have maintained that we should instead use the term ‘thinking system’ to describe what the researchers created.

Does it matter whether we describe this as a thinking system, or a sentient one?


Announcement: National Oxford Uehiro Prize in Practical Ethics Now Open For Entries

NATIONAL OXFORD UEHIRO PRIZE IN PRACTICAL ETHICS 2023
• All graduate and undergraduate students (full and part-time) currently enrolled at any UK university, in any subject, are invited to enter the National Oxford Uehiro Prize in Practical Ethics by submitting an essay of up to 2000 words on any topic relevant to practical ethics.
• Two undergraduate papers and two graduate papers will be shortlisted from those submitted to go forward to a public presentation and discussion, where the winner of each category will be selected.
• The winner from each category will receive a prize of £500, and the runner up £200. Revised versions of the two winning essays will be considered for publication in the Journal of Practical Ethics. The two winners from the prize will be invited to take part in an online Q&A, as part of the Oxford Uehiro Festival of Arguments.
• To enter, please submit your written papers by the end of Tuesday 7th February 2023 to rocci.wilkinson@philosophy.ox.ac.uk. Finalists will be notified of their selection by Tuesday 21st February. The public presentation will take place on Tuesday 14th March, from 5:30pm. Please save this date, as you will need to attend if selected as a finalist.
Detailed instructions are available here 

What is the Most Important Question in Ethics?

by Roger Crisp

It’s often been said (including by Socrates) that the most important, ultimate, or fundamental question in ethics is: ‘How should one live?’

New issue of the Journal of Practical Ethics – Volume 10 Issue 1

We are pleased to announce the publication of Volume 10 Issue 1 of the Journal of Practical Ethics, our open access journal on moral and political philosophy. You can read our complete open access archive online, and hard copies will shortly be available to purchase at cost price.

Anderson, E. S., (2022) “Can We Talk?: Communicating Moral Concern in an Era of Polarized Politics”, Journal of Practical Ethics 10(1). doi: https://doi.org/10.3998/jpe.1180

Renzo, M., (2022) “Defective Normative Powers: The Case of Consent”, Journal of Practical Ethics 10(1). doi: https://doi.org/10.3998/jpe.2382

Hosein, A., (2022) “Illusions of Control”, Journal of Practical Ethics 10(1). doi: https://doi.org/10.3998/jpe.2381

Are We Heading Towards a Post-Responsibility Era? Artificial Intelligence and the Future of Morality

By Maximilian Kiener. First published on the Public Ethics Blog

AI, Today and Tomorrow

77% of our electronic devices already use artificial intelligence (AI). By 2025, the global AI market is estimated to grow to 60 billion US dollars. By 2030, AI may even boost global GDP by 15.7 trillion US dollars. And, at some point thereafter, AI may become the last human invention, provided it optimises itself and takes over research and innovation, leading to what some have termed an ‘intelligence explosion’. In the grand scheme of things, as Google CEO Sundar Pichai believes, AI will then have a greater impact on humanity than electricity and fire did.

Some of these latter statements remain controversial. Yet it is also clear that AI increasingly outperforms humans in areas that no machine had ever entered before, including driving cars, diagnosing illnesses, and selecting job applicants. Moreover, AI promises great advantages, such as making transportation safer, optimising health care, and assisting scientific breakthroughs, to mention only a few.

There is, however, a lingering concern. Even the best AI is not perfect, and when things go wrong, e.g. when an autonomous car hits a pedestrian, when Amazon’s Alexa manipulates a child, or when an algorithm discriminates against certain ethnic groups, we may face a ‘responsibility gap’: a situation in which no one is responsible for the harm caused by AI. Responsibility gaps may arise because current AI systems themselves cannot be morally responsible for what they do, and the humans involved may no longer satisfy key conditions of moral responsibility, such as the following three.


Fracking and the Precautionary Principle

By Charles Foster

Image: Leolynn11, CC BY-SA 4.0 <https://creativecommons.org/licenses/by-sa/4.0>, via Wikimedia Commons

The UK Government has lifted the prohibition on fracking.

The risks associated with fracking have been much discussed. There is widespread agreement that earthquakes cannot be excluded.

The precautionary principle springs immediately to mind. There are many iterations of this principle. The gist of the principle, and the gist of the objections to it, are helpfully summarised as follows:

In the regulation of environmental, health and safety risks, “precautionary principles” state, in their most stringent form, that new technologies and policies should be rejected unless and until they can be shown to be safe. Such principles come in many shapes and sizes, and with varying degrees of strength, but the common theme is to place the burden of uncertainty on proponents of potentially unsafe technologies and policies. Critics of precautionary principles urge that the status quo itself carries risks, either on the very same margins that concern the advocates of such principles or else on different margins; more generally, the costs of such principles may outweigh the benefits. 

Whichever version of the principle one adopts, it seems that the UK Government’s decision falls foul of it. Even if one accepts (controversially) that the increased flow of gas from fracking will not in itself cause harm (by way of climate disruption), it seems impossible to say that any identifiable benefit from the additional gas (which could only be by way of reduced fuel prices) clearly outweighs the potential non-excludable risk from earthquakes (even if that risk is very small).

If that’s right, can the law do anything about it?

Guest Post: Dear Robots, We Are Sorry

Written by Stephen Milford, PhD

Institute for Biomedical Ethics, Basel University


The rise of AI presents humanity with an interesting prospect: a companion species. Ever since our last hominid cousins went extinct on the island of Flores almost 12,000 years ago, Homo sapiens has been alone in the world.[i] AI, true AI, offers us the unique opportunity to regain what was lost to us. Ultimately, this is what has captured our imagination and drives our research forward. Make no mistake, our intentions with AI are clear: artificial general intelligence (AGI). A being that is like us, a personal being (whatever ‘person’ may mean).

If any of us are in any doubt about this, consider Turing’s famous test. The aim is not to see how intelligent the AI can be, how many calculations it performs, or how it sifts through data. An AI will pass the test if a person judges it to be indistinguishable from another person. Whether this is artificial or real is academic; the result is the same: human persons will experience the reality of another person for the first time in 12,000 years, and we are closer now than ever before.

Protecting Children or Policing Gender?

Laws on genital mutilation, gender affirmation and cosmetic genital surgery are at odds. The key criteria should be medical necessity and consent.

By Brian D. Earp (@briandavidearp)

———————-

In Ohio, USA, lawmakers are currently considering the Save Adolescents from Experimentation (SAFE) Act that would ban hormones or surgeries for minors who identify as transgender or non-binary. In April this year, Alabama passed similar legislation.

Alleging anti-trans prejudice, opponents of such legislation say these bans will stop trans youth from accessing necessary healthcare, citing guidance from the American Psychiatric Association, the American Medical Association and the American Academy of Pediatrics.

Providers of gender-affirming services point out that puberty-suppressing medications and hormone therapies are considered standard of care for trans adolescents who qualify. Neither is administered before puberty, with younger children receiving psychosocial support only. Meanwhile, genital surgeries for gender affirmation are rarely performed before age 18.

Nevertheless, proponents of the new laws say they are needed to protect vulnerable minors from understudied medical risks and potentially lifelong bodily harms. Proponents note that irreversible mastectomies are increasingly performed before the age of legal majority.

Republican legislators in several states argue that if a child’s breasts or genitalia are ‘healthy’, there is no medical or ethical justification to use hormones or surgeries to alter those parts of the body.

However, while trans adolescents struggle to access voluntary services and rarely undergo genital surgeries prior to adulthood, non-trans-identifying children in the United States and elsewhere are routinely subjected to medically unnecessary surgeries affecting their healthy sexual anatomy — without opposition from conservative lawmakers.


Reflective Equilibrium in a Turbulent Lake: AI Generated Art and The Future of Artists

Stable Diffusion image, prompt: “Reflective equilibrium in a turbulent lake. Painting by Greg Rutkowski”. By Anders Sandberg – Future of Humanity Institute, University of Oxford

Is there a future for humans in art? Over the last few weeks the question has been loudly debated online, as machine learning made a surprise charge into picture-making. One AI-generated image won a prize at a state art fair. But artists complain that AI art is actually a rehash of their own work, a form of automated plagiarism that threatens their livelihood.

How do we ethically navigate the turbulent waters of human and machine creativity, business demands, and rapid technological change? Is it even possible?


Guest Post: The Ethics of the Insulted—Salman Rushdie’s Case

Written by Hossein Dabbagh – Philosophy Tutor at Oxford University

hossein.dabbagh@conted.ox.ac.uk


We have the right, ceteris paribus, to ridicule a belief (its propositional content), i.e., to criticise it harshly. If someone, despite all evidence, believes with certainty that no one can see him when he closes his eyes, we might be justified in exercising our right to ridicule his belief. But if we ridicule a belief in terms of its propositional content (i.e., “what a ridiculous proposition”), don’t we thereby “insult” anyone who holds the belief, by implying that they must not be very intelligent? It seems so. If ridiculing a belief overlaps with insulting a person by virtue of their holding that belief, an immediate question arises: do we have the right to insult people, in the sense of expressing a lack of appropriate regard for the belief-holder? Sometimes, at least. Some people might deserve to be insulted on the basis of the beliefs they hold or express: for example, politicians who harm the public with their actions and speeches. However, things get complicated if we take into account people’s right to live with respect, i.e., free from unwarranted insult. We seem to have two conflicting rights that need to be weighed against each other in practice. The insulters would only have a pro tanto right to insult if this right is not overridden by the weightier rights that various insultees (i.e., believers) may have.
