Cross Post: Ten Ethical Flaws in the Caster Semenya Decision on Intersex in Sport
Written by Julian Savulescu, University of Oxford

Jon Connell on Flickr, CC BY-NC
Middle-distance runner Caster Semenya will need to take hormone-lowering agents, or have surgery, if she wishes to continue her career in her chosen athletic events.
The Court of Arbitration for Sport (CAS) decided last week to uphold a rule requiring athletes with certain forms of what are called “disorders of sex development” (DSD) – more commonly known as “intersex” conditions – to lower their testosterone levels in order to remain eligible to compete as women in certain elite races.
Semenya brought the case to CAS, arguing that a 2018 decision preventing some women, including herself, from competing in some female events was discriminatory.
This ruling is flawed. On the basis of science and ethical reasoning, there are ten reasons CAS’s decision does not stand up. Continue reading
Cross Post: Why No-Platforming is Sometimes a Justifiable Position
Written by Professor Neil Levy
Originally published in Aeon Magazine
The discussion over no-platforming is often presented as a debate between proponents of free speech, who think that the only appropriate response to bad speech is more speech, and those who think that speech can be harmful. I think this way of framing the debate is only half-right. Advocates of open speech emphasise evidence, but they overlook the ways in which the provision of a platform itself provides evidence.
No-platforming occurs when a person is prevented from contributing to a public debate, either through policy or protest, on the grounds that their beliefs are dangerous or unacceptable. Open-speech advocates highlight what we might call first-order evidence: evidence for and against the arguments that speakers make. But they overlook higher-order evidence. Continue reading
Cross Post: Biased Algorithms: Here’s a More Radical Approach to Creating Fairness
Written by Dr Tom Douglas

Our lives are increasingly affected by algorithms. People may be denied loans, jobs, insurance policies, or even parole on the basis of the risk scores these algorithms produce.
Yet algorithms are notoriously prone to biases. For example, algorithms used to assess the risk of criminal recidivism often have higher error rates for minority ethnic groups. As ProPublica found, the COMPAS algorithm – widely used to predict re-offending in the US criminal justice system – had a higher false positive rate for black people than for white people; black people were more likely to be wrongly predicted to re-offend.
Corrupt code. Vintage Tone/Shutterstock
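The disparity ProPublica measured is easy to state precisely: a group’s false positive rate is the share of its members who did not re-offend but were predicted to. A minimal Python sketch of that per-group calculation – the records and group labels below are invented for illustration, not ProPublica’s data:

```python
# Hypothetical illustration of a group-wise false positive rate audit.
# Each record is (group, predicted_reoffend, actually_reoffended);
# the data is made up for the example, not ProPublica's dataset.
records = [
    ("A", True, False), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", False, False), ("B", False, True),
]

def false_positive_rate(records, group):
    """FPR = false positives / all members who did not actually re-offend."""
    negatives = [pred for g, pred, actual in records
                 if g == group and not actual]
    if not negatives:
        return float("nan")  # no non-re-offenders in this group
    return sum(negatives) / len(negatives)

for g in ("A", "B"):
    print(g, false_positive_rate(records, g))
```

The point of auditing each group separately is that a model can have respectable overall accuracy while its errors fall much more heavily on one group than another.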
Cross Post: Fresh Urgency in Mapping Out Ethics of Brain Organoid Research
Written by Julian Koplin, University of Melbourne and
Julian Savulescu, University of Oxford
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Researchers have grown groups of brain cells in the lab – known as ‘organoids’ – that produce brain waves resembling those found in premature infants. from www.shutterstock.com
Scientists have become increasingly adept at creating brain organoids – which are essentially miniature human brains grown in the laboratory from stem cells.
Although brain organoid research might seem outlandish, it serves an important moral purpose. Among other benefits, it promises to help us understand early brain development and neurodevelopmental disorders such as microcephaly, autism and schizophrenia.
Cross Post: What If Banks Were the Main Protectors of Customers’ Private Data?
Written by Carissa Véliz
Dr Carissa Véliz, Oxford Uehiro Centre research fellow, has recently published a provocative article in the Harvard Business Review:
The ability to collect and exploit consumers’ personal data has long been a source of competitive advantage in the digital economy. It is their control and use of this data that has enabled the likes of Google, Amazon, Alibaba, and Facebook to dominate online markets.
But consumers are increasingly concerned about the vulnerability that comes with surrendering data. A growing number of cyberattacks — the 2017 hacking of the credit reporting agency Equifax being a case in point, not to mention the likely interference by Russian government-sponsored hackers in the 2016 US presidential election — have triggered something of a “techlash”.
Even without these scandals, it is likely that sooner or later every netizen will suffer some bad data experience: from their credit card number being stolen, to their account getting hacked or their personal details being exposed; from the embarrassment of an inappropriate ad popping up at work, to realizing that their favorite airline is charging them more than it charges others for the same flight.
See here for the full article, and to join in the conversation.
Why It’s Important to Test Drugs on Pregnant Women
By Mackenzie Graham
Crosspost from The Conversation. Click here to read the full article.
The development of accessible treatment options for pregnant women is a significant public health issue. Yet very few medications are approved for use during pregnancy, and most drug labels contain little data to inform prescribing decisions. This means that most medicines taken during pregnancy are used without evidence to guide safe and effective dosing.
The United States Food and Drug Administration recently published draft ethical guidelines for how and when to include pregnant women in drug development clinical trials. These guidelines call for “the judicious inclusion of pregnant women in clinical trials and careful attention to potential foetal risk”. The guidelines also distinguish between risks that are related to the research and those that are not, and address the appropriate level of risk to which a foetus might be exposed. Continue reading
Cross Post: Common Sense for A.I. Is a Great Idea. But It’s Harder Than It Sounds.
Written by Carissa Véliz
Crosspost from Slate. Click here to read the full article.
At the moment, artificial intelligence may have a perfect memory and be better at arithmetic than we are, but it is clueless. It takes only a few seconds of interaction with any digital assistant to realize one is not in the presence of a very bright interlocutor. Among the unexpected items users have found in their shopping lists after talking to (or near) Amazon’s Alexa are 150,000 bottles of shampoo, sled dogs, “hunk of poo,” and a girlfriend.
The mere exasperation of talking to a digital assistant can be enough to make one miss human companionship, feel nostalgia for all things analog and dumb, and forswear any future attempts at communicating with mindless pieces of metal inexplicably labelled “smart.” (Not to mention all the privacy issues.) An A.I. not understanding what a shopping list is, and the kinds of items that are appropriate to such lists, is evidence of a much broader problem: it lacks common sense.
The Allen Institute for Artificial Intelligence, or AI2, created by Microsoft co-founder Paul Allen, has announced it is embarking on a new $125 million research initiative to try to change that. “To make real progress in A.I., we have to overcome the big challenges in the area of common sense,” Allen told the New York Times. AI2 takes common sense to include the “infinite set of facts, heuristics, observations … that we bring to the table when we address a problem, but the computer doesn’t.” Researchers will use a combination of crowdsourcing, machine learning, and machine vision to create a huge “repository of knowledge” that will bring about common sense. Of paramount importance among its uses is to get A.I. to “understand what’s harmful to people.”
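To make the missing ingredient concrete, here is a toy sketch – every name and fact in it is invented for illustration, not AI2’s actual design – of the kind of lookup a “repository of knowledge” might support before a digital assistant commits an item to a shopping list:

```python
# Toy commonsense "repository of knowledge": a few hand-entered
# (in AI2's plan, crowdsourced) facts, plus the sanity check an
# assistant could run before accepting a shopping-list entry.
# Hypothetical sketch only; not AI2's system or data.
commonsense_facts = {
    "shampoo": {"purchasable": True, "typical_quantity_max": 10},
    "sled dog": {"purchasable": False},
    "girlfriend": {"purchasable": False},
}

def plausible_shopping_item(item, quantity):
    facts = commonsense_facts.get(item)
    if facts is None:
        return False  # no knowledge: better to ask a human than guess
    if not facts["purchasable"]:
        return False  # commonsense veto: not a thing one buys in a shop
    return quantity <= facts.get("typical_quantity_max", 100)

print(plausible_shopping_item("shampoo", 150_000))  # False: absurd quantity
print(plausible_shopping_item("sled dog", 1))       # False: not purchasable
```

The hard part, of course, is not the lookup but filling such a repository with the “infinite set of facts, heuristics, observations” the article describes.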
This article was originally published on Slate. To read the full article and to join in the conversation please follow this link.
The ‘Killer Robots’ Are Us
Written by Dr Michael Robillard
In a recent New York Times article Dr Michael Robillard writes: “At a meeting of the United Nations Convention on Conventional Weapons in Geneva in November, a group of experts gathered to discuss the military, legal and ethical dimensions of emerging weapons technologies. Among the views voiced at the convention was a call for a ban on what are now being called ‘lethal autonomous weapons systems.’
A 2012 Department of Defense directive defines an autonomous weapon system as one that, ‘once activated, can select and engage targets without further intervention by a human operator. This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system, but can select and engage targets without further human input after activation.’”
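Read literally, the quoted definition turns on two properties: after activation, no further human input is needed to select and engage targets; and in the human-supervised variant, an operator can still override. A toy Python model of just that control flow – every name here is hypothetical, purely to illustrate the definition, not any real system:

```python
# Toy model of the quoted DoD definition, for illustration only.
# After activation the system selects targets without further human
# input; the human-supervised variant adds an operator override.
class AutonomousWeaponSystem:
    def __init__(self, human_supervised=False):
        self.human_supervised = human_supervised
        self.active = False
        self.overridden = False

    def activate(self):
        # The last required human input under the definition.
        self.active = True

    def operator_override(self):
        # Only human-supervised systems are "designed to allow human
        # operators to override operation of the weapon system".
        if self.human_supervised:
            self.overridden = True

    def select_target(self, candidates):
        # "Can select and engage targets without further intervention
        # by a human operator" -- note that no human input appears here.
        if not self.active or self.overridden:
            return None
        return candidates[0] if candidates else None
```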
Follow this link to read the article in full.
Cross Post: Think Twice Before Sending Facebook Your Nude Photos: The Shadow Brokers’ Disclosures Prove Privacy and Security Are Not a Zero-Sum Game
Written by Dr Carissa Véliz
This article first appeared in El País
Time and again, we have been sold the story that we need to give up privacy in exchange for security. According to former NSA security consultant Ed Giorgio, ‘Privacy and security are a zero-sum game’—meaning that for every increase in one, there is a decrease in the other. The go-to argument to justify mass surveillance, then, is that sacrificing our privacy is necessary for government agencies to be able to protect us from the bad guys. Continue reading
Cross Post: Sacred Places and Traditions with Lea Ypi
Suppose a religious community regards a site – with, say, a stone circle – as sacred. It has for centuries been used as a place of prayer and contemplation. The land is owned by the state, which wants to sell it off to build apartment blocks. You might think that the deep attachment the religious community has to this place of worship gives it some right to protect the site. But Lea Ypi of the London School of Economics is not so sure.
Lea Ypi’s paper ‘Structural Injustice and the Place of Attachment’, was published in the Journal of Practical Ethics, Vol 5 No.1.
In response to her paper, Lea Ypi was interviewed by David Edmonds for the Philosophy 24/7 podcast series. The podcast is available here on the Philosophy 24/7 website.
Lea Ypi is Professor in Political Theory in the Government Department, London School of Economics, and Adjunct Associate Professor in Philosophy at the Research School of Social Sciences, Australian National University. Before joining the LSE, she was a Post-doctoral Prize Research Fellow at Nuffield College (Oxford) and a researcher at the European University Institute where she obtained her PhD. Her website is here.