
Cross Post: Privacy is a Collective Concern: When We Tell Companies About Ourselves, We Give Away Details About Others, Too.

BY CARISSA VÉLIZ

This article was originally published in New Statesman America

Cross Post: Is Mandatory Vaccination the Best Way to Tackle Falling Rates of Childhood Immunisation?

Written by Dr Alberto Giubilini and Dr Samantha Vanderslott

This article was originally published on the Oxford Martin School website.

Following the publication of figures showing UK childhood vaccination rates have fallen for the fifth year in a row, researchers from the Oxford Martin Programme on Collective Responsibility for Infectious Disease discuss possible responses.

Alberto Giubilini: Yes, “we need to be bold” and take drastic measures to increase vaccination uptake

In response to the dramatic fall in vaccination uptake in the UK, Health Secretary Matt Hancock has said that “we need to be bold” and that he “will not rule out action so that every child is properly protected”. This suggests that the Health Secretary is seriously considering some form of mandatory vaccination programme, or some form of penalty for non-vaccination, as is already the case in other countries such as the US, Italy, France, and Australia. It is about time the UK took action to ensure that individuals fulfil their social responsibility to protect not only their own children, but also other people, from infectious disease, and more generally to make their fair contribution to maintaining a good level of public health. Continue reading

The Ethics of Social Prescribing: An Overview

Written by Rebecca Brown, Stephanie Tierney, Amadea Turk.

This post was originally published on the NIHR School for Primary Care Research website which can be accessed here

Health problems often co-occur with social and personal factors (e.g. isolation, debt, insecure housing, unemployment, relationship breakdown and bereavement). Such factors can be particularly important in the context of non-communicable diseases (NCDs), where they might contribute causally to disease, or reduce the capacity of patients to self-manage their conditions (leading to worse outcomes). This results in the suffering of individuals and a greater burden being placed on healthcare resources.

A potential point of intervention is at the level of addressing these upstream contributors to poor health. A suggested tool – gaining momentum amongst those involved in health policy – is the use of ‘social prescribing’. Social prescribing focuses on addressing people’s non-medical needs, which it is hoped will subsequently reduce their medical needs. In primary care, social prescribing can take a range of forms. For example, it may involve upskilling existing members of staff (e.g. receptionists) to signpost patients to relevant local assets (e.g. organisations, groups, charities) to address their non-medical needs. It is also becoming common for GPs to refer patients (or people may self-refer) to a link worker (sometimes called a care navigator) who can work with them to identify their broader social and personal needs. Together, they then develop a plan for how those needs could be met through engagement with activities, services or events in the local community. The resources that link workers direct people towards are often run by voluntary organisations and might include, among other things, sports groups, arts and crafts, drama, gardening, cookery, volunteering, housing advice, debt management, and welfare rights.

Supporting people to establish more stable and fulfilling social lives whilst at the same time reducing healthcare costs seems like a win-win. However, it is essential to evaluate the justifications for the introduction of social prescribing schemes, including their effectiveness. This raises a number of complicating factors, including some questions that require not just a consideration of empirical evidence, but a commitment to certain philosophical and ethical positions.

Continue reading

Cross Post: Ten Ethical Flaws in the Caster Semenya Decision on Intersex in Sport

Written by Julian Savulescu, University of Oxford

Caster Semenya is legally female, was raised as female from birth, and identifies as female.
Image: Jon Connell on Flickr, CC BY-NC

Middle-distance runner Caster Semenya will need to take hormone-lowering agents, or have surgery, if she wishes to continue her career in her chosen athletic events.

The Court of Arbitration for Sport (CAS) decided last week to uphold a rule requiring athletes with certain forms of what it calls “disorders of sex development” (DSD) – more commonly called “intersex” conditions – to lower their testosterone levels in order to remain eligible to compete as women in certain elite races.

The case was brought to CAS by Semenya, who argued that a 2018 rule preventing some women, including herself, from competing in some female events was discriminatory.

This ruling is flawed. On the basis of science and ethical reasoning, there are ten reasons CAS’s decision does not stand up. Continue reading

Cross Post: Why No-Platforming is Sometimes a Justifiable Position

Written by Professor Neil Levy

Originally published in Aeon Magazine

The discussion over no-platforming is often presented as a debate between proponents of free speech, who think that the only appropriate response to bad speech is more speech, and those who think that speech can be harmful. I think this way of framing the debate is only half-right. Advocates of open speech emphasise evidence, but they overlook the ways in which the provision of a platform itself provides evidence.

No-platforming is when a person is prevented from contributing to a public debate, either through policy or protest, on the grounds that their beliefs are dangerous or unacceptable. Open-speech advocates highlight what we might call first-order evidence: evidence for and against the arguments that the speakers make. But they overlook higher-order evidence. Continue reading

Cross Post: Biased Algorithms: Here’s a More Radical Approach to Creating Fairness

Written by Dr Tom Douglas


Our lives are increasingly affected by algorithms. People may be denied loans, jobs, insurance policies, or even parole on the basis of risk scores that they produce.

Yet algorithms are notoriously prone to biases. For example, algorithms used to assess the risk of criminal recidivism often have higher error rates in minority ethnic groups. As ProPublica found, the COMPAS algorithm – widely used to predict re-offending in the US criminal justice system – had a higher false positive rate in black than in white people; black people were more likely to be wrongly predicted to re-offend.
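The disparity ProPublica reported can be made concrete. A false positive here is someone predicted to re-offend who in fact did not, and the false positive rate is computed separately for each group. The sketch below uses made-up toy data (not real COMPAS records) purely to illustrate how such a group-wise comparison works:

```python
# Minimal sketch with hypothetical data: comparing false positive rates
# across two groups. FPR = FP / (FP + TN), i.e. the share of people who
# did NOT re-offend but were nonetheless predicted to be high risk.

def false_positive_rate(predictions, outcomes):
    """predictions/outcomes are 0/1 lists; 1 = predicted/actual re-offence."""
    fp = sum(1 for p, o in zip(predictions, outcomes) if p == 1 and o == 0)
    tn = sum(1 for p, o in zip(predictions, outcomes) if p == 0 and o == 0)
    return fp / (fp + tn)

# Hypothetical records: (group, predicted_reoffend, actually_reoffended)
records = [
    ("A", 1, 0), ("A", 1, 0), ("A", 0, 0), ("A", 1, 1), ("A", 0, 1),
    ("B", 1, 0), ("B", 0, 0), ("B", 0, 0), ("B", 1, 1), ("B", 0, 1),
]

for group in ("A", "B"):
    preds = [p for g, p, _ in records if g == group]
    outs = [o for g, _, o in records if g == group]
    print(group, round(false_positive_rate(preds, outs), 2))
# Group A's FPR (0.67) is double group B's (0.33) on this toy data:
# the same kind of asymmetry ProPublica found for COMPAS.
```

An algorithm can have this asymmetry even when its overall accuracy is similar across groups, which is why fairness audits look at error rates per group rather than accuracy alone.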

Corrupt code. Image: Vintage Tone/Shutterstock

Continue reading

Cross Post: Fresh Urgency in Mapping Out Ethics of Brain Organoid Research


Written by Julian Koplin, University of Melbourne and

Julian Savulescu, University of Oxford

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Researchers have grown groups of brain cells in the lab – known as ‘organoids’ – that produce brain waves resembling those found in premature infants.
Image: www.shutterstock.com


Scientists have become increasingly adept at creating brain organoids – which are essentially miniature human brains grown in the laboratory from stem cells.

Although brain organoid research might seem outlandish, it serves an important moral purpose. Among other benefits, it promises to help us understand early brain development and neurodevelopmental disorders such as microcephaly, autism and schizophrenia.

Continue reading

Cross Post: What If Banks Were the Main Protectors of Customers’ Private Data?

Written by Carissa Véliz

Dr Carissa Véliz, Oxford Uehiro Centre research fellow, has recently published a provocative article in the Harvard Business Review:

The ability to collect and exploit consumers’ personal data has long been a source of competitive advantage in the digital economy. It is their control and use of this data that has enabled the likes of Google, Amazon, Alibaba, and Facebook to dominate online markets.

But consumers are increasingly concerned about the vulnerability that comes with surrendering data. A growing number of cyberattacks — the 2017 hacking of credit watch company Equifax being a case in point, not to mention the likely interference by Russian government sponsored hackers in the 2016 US Presidential elections — have triggered something of a “techlash”.

Even without these scandals, it is likely that sooner or later every netizen will have suffered at some point from a bad data experience: from their credit card number being stolen, to their account getting hacked, or their personal details getting exposed; from suffering embarrassment from an inappropriate ad while at work, to realizing that their favorite airline is charging them more than they charge others for the same flight.

See here for the full article, and to join in the conversation.

Why It’s Important to Test Drugs on Pregnant Women

By Mackenzie Graham

Crosspost from The Conversation. Click here to read the full article.

The development of accessible treatment options for pregnant women is a significant public health issue. Yet, very few medications are approved for use during pregnancy. Most drug labels have little data to inform prescribing decisions. This means that most medicines taken during pregnancy are used without data to guide safe and effective dosing.

The United States Food and Drug Administration recently published draft ethical guidelines for how and when to include pregnant women in drug development clinical trials. These guidelines call for “the judicious inclusion of pregnant women in clinical trials and careful attention to potential foetal risk”. The guidelines also distinguish between risks that are related to the research and those that are not, and the appropriate level of risk to which a foetus might be exposed. Continue reading

Cross Post: Common Sense for A.I. Is a Great Idea. But it’s Harder Than it Sounds.

Written by Carissa Veliz

Crosspost from Slate.  Click here to read the full article

At the moment, artificial intelligences may have perfect memories and be better at arithmetic than we are, but they are clueless. It takes a few seconds of interaction with any digital assistant to realize one is not in the presence of a very bright interlocutor. Among some of the unexpected items users have found in their shopping lists after talking to (or near) Amazon’s Alexa are 150,000 bottles of shampoo, sled dogs, “hunk of poo,” and a girlfriend.

The mere exasperation of talking to a digital assistant can be enough to miss human companionship, feel nostalgia for all things analog and dumb, and forswear any future attempts at communicating with mindless pieces of metal inexplicably labelled “smart.” (Not to mention all the privacy issues.) A.I. not understanding what a shopping list is, and the kinds of items that are appropriate to such lists, is evidence of a much broader problem: it lacks common sense.

The Allen Institute for Artificial Intelligence, or AI2, created by Microsoft co-founder Paul Allen, has announced it is embarking on a new $125 million research initiative to try to change that. “To make real progress in A.I., we have to overcome the big challenges in the area of common sense,” Allen told the New York Times. AI2 takes common sense to include the “infinite set of facts, heuristics, observations … that we bring to the table when we address a problem, but the computer doesn’t.” Researchers will use a combination of crowdsourcing, machine learning, and machine vision to create a huge “repository of knowledge” that will bring about common sense. Of paramount importance among its uses is to get A.I. to “understand what’s harmful to people.”

This article was originally published on Slate.  To read the full article and to join in the conversation please follow this link.
