
Announcement: Vacancy Research Fellow in Applied Moral Philosophy

Applications are invited for a full-time Research Fellow position (Grade 7: £31,604 – £38,883 p.a.) to conduct research in philosophy and applied ethics for the research project: Neurointerventions in Crime Prevention: An Ethical Analysis, which is hosted by the Oxford Uehiro Centre for Practical Ethics within the Faculty of Philosophy.

This post is fixed-term for 1 year from the date of appointment with excellent opportunities for career advancement.

The Fellow will conduct collaborative research under the supervision of Dr Thomas Douglas (Principal Investigator for the research project), with a focus on ‘The Ethics of Environmental and Biological Behavioural Influence in Crime Prevention’, examining the nature and moral status of different kinds of behavioural influence (including coercion, manipulation, nudging and biological intervention).

The Fellow will produce publications of high quality research, undertake literature reviews, and participate in other project activities. This participation may involve developing collaborative relationships, contributing to public engagement activities, grant applications and event planning, and performing other occasional duties such as event organisation, administration and teaching.

The postholder is required to hold the degree of PhD (or equivalent), or be a doctoral candidate near completion, in philosophy or other relevant discipline (such as law or political theory) with specialisation in applied ethics, normative ethics, political philosophy or other related sub-discipline. Also essential are excellent research skills, an outstanding research record, and demonstrated ability to publish in journals in applied ethics, normative ethics, or political philosophy.

Applications are to be submitted no later than 12.00 midday on Friday 8 June 2018. Further details, including how to apply, are available online.

Ethical AI Kills Too: An Assessment of the Lords Report on AI in the UK

Hazem Zohny and Julian Savulescu
Cross-posted with the Oxford Martin School

Developing AI that does not eventually take over humanity or turn the world into a dystopian nightmare is a challenge. It also has an interesting effect on philosophy, and in particular ethics: suddenly, a great deal of the millennia-long debates on the good and the bad, the fair and unfair, need to be concluded and programmed into machines. Does the autonomous car in an unavoidable collision swerve to avoid killing five pedestrians at the cost of its passenger’s life? And what exactly counts as unfair discrimination or privacy violation when “Big Data” suggests an individual is, say, a likely criminal?

The recent House of Lords Artificial Intelligence Committee’s report puts ethics front and centre. It engages thoughtfully with a wide range of issues: algorithmic bias, the monopolised control of data by large tech companies, the disruptive effects of AI on industries, and its implications for education, healthcare, and weaponry.

Many of these are economic and technical challenges. For instance, the report notes Google’s continued inability to fix its visual identification algorithms, which it emerged three years ago could not distinguish between gorillas and black people. For now, the company simply does not allow users of Google Photos to search for gorillas.

But many of the challenges are also ethical – in fact, central to the report is that while the UK is unlikely to lead globally in the technical development of AI, it can lead the way in putting ethics at the centre of AI’s development and use.

Continue reading

Guest Post: Cambridge Analytica: You Can Have My Money but Not My Vote

Emily Feng-Gu, Medical Student, Monash University

When news broke that Facebook data from 50 million American users had been harvested and misused, and that Facebook had kept silent about it for two years, the 17th of March 2018 became a bad day for the mega-corporation. In the week following what became known as the Cambridge Analytica scandal, Facebook’s market value fell by around $80 billion. Facebook CEO Mark Zuckerberg came under intense scrutiny and criticism, the #DeleteFacebook movement was born, and the incident received wide media coverage. Elon Musk, the tech billionaire and founder of Tesla, was one high-profile deleter. Facebook, however, is only one morally questionable half of the story.

Cambridge Analytica was allegedly involved in influencing the outcomes of several high-profile elections, including the 2016 US election, the 2016 Brexit referendum, and the 2013 and 2017 Kenyan elections. Its methods involve data mining and analysis to more precisely tailor campaign materials to audiences and, as whistleblower Christopher Wylie put it, ‘target their inner demons.’1 The practice, known as ‘micro-targeting’, has become more common in the digital age of politics and aims to influence swing voter behaviour by using data and information to home in on fears, anxieties, or attitudes which campaigns can use to their advantage. This was one of the techniques used in Trump’s campaign, targeting the 50 million unsuspecting Americans whose Facebook data was misused. Further adding to the ethical unease, the company was founded by Republican key players Steve Bannon, later to become Trump’s chief strategist, and billionaire Republican donor Robert Mercer.

There are two broad issues raised by the incident.

Continue reading

Guest Post: Consequentialism and Ethics? Bridging the Normative Gap.

Written by Simon Beard

University of Cambridge

After years of deliberation, a US moratorium on so-called ‘gain of function’ experiments, involving the production of novel pathogens with a high degree of pandemic potential, has been lifted [https://www.nih.gov/about-nih/who-we-are/nih-director/statements/nih-lifts-funding-pause-gain-function-research]. At the same time, a ground-breaking new set of guidelines about how and when such experiments can be funded has been published [https://thebulletin.org/new-pathogen-research-rules-gain-function-loss-clarity11540] by the National Institutes of Health. This is to be welcomed, and I hope that these guidelines stimulate broader discussions about the ethics and funding of dual-use scientific research, both inside and outside of the life sciences. At the very least, it is essential that people learn from this experience and do not engage in the kind of intellectual head-banging that has undermined important research and disrupted the careers of talented researchers.

Yet, there is something in these guidelines that many philosophers may find troubling.

These new guidelines insist, for the first time it seems, that NIH funding will depend not only on the benefits of scientific research outweighing the potential risks, but also on whether or not the research is “ethically justified”. In defining what is ethically justifiable, the NIH makes specific reference to standards of beneficence, non-maleficence, justice, scientific freedom, respect for persons and responsible stewardship.

Much has been made of this additional dimension of evaluation and whether or not review committees will be up to assessing it. Whereas before, it is said, they merely had to assess whether research would have good or bad outcomes, they now have to determine whether it is right or wrong as well! Continue reading

Cross Post: Common Sense for A.I. Is a Great Idea. But it’s Harder Than it Sounds.

Written by Carissa Veliz

Crosspost from Slate.  Click here to read the full article

At the moment, artificial intelligences may have perfect memories and be better at arithmetic than we are, but they are clueless. It takes a few seconds of interaction with any digital assistant to realize one is not in the presence of a very bright interlocutor. Among some of the unexpected items users have found in their shopping lists after talking to (or near) Amazon’s Alexa are 150,000 bottles of shampoo, sled dogs, “hunk of poo,” and a girlfriend.

The mere exasperation of talking to a digital assistant can be enough to miss human companionship, feel nostalgia for all things analog and dumb, and forswear any future attempts at communicating with mindless pieces of metal inexplicably labelled “smart.” (Not to mention all the privacy issues.) A.I. not understanding what a shopping list is, and the kinds of items that are appropriate to such lists, is evidence of a much broader problem: They lack common sense.

The Allen Institute for Artificial Intelligence, or AI2, created by Microsoft co-founder Paul Allen, has announced it is embarking on a new $125 million research initiative to try to change that. “To make real progress in A.I., we have to overcome the big challenges in the area of common sense,” Allen told the New York Times. AI2 takes common sense to include the “infinite set of facts, heuristics, observations … that we bring to the table when we address a problem, but the computer doesn’t.” Researchers will use a combination of crowdsourcing, machine learning, and machine vision to create a huge “repository of knowledge” that will bring about common sense. Of paramount importance among its uses is to get A.I. to “understand what’s harmful to people.”

This article was originally published on Slate.  To read the full article and to join in the conversation please follow this link.

Announcement: Medical Ethics Symposium on Health Care Rationing – Oxford June 20th. Registration Now Open

Practical medical ethics: Rationing responsibly in an age of austerity
Date: June 20th 2018, 2-5pm, includes refreshments
Location: Ship Street Centre, Jesus College, Oxford

Health professionals face ever-expanding possibilities for medical treatment, increasing patient expectations and, at the same time, intense pressure to reduce healthcare costs. This frequently leads to conflicts between obligations to current patients and to others who might benefit from treatment.

Is it ethical for doctors and other health professionals to engage in bedside rationing? What ethical principles should guide decisions (for example about which patients to offer intensive care admission or surgery)? Is it discriminatory to take into account disability in allocating resources? If patients are responsible for their illness, should that lead to a lower priority for treatment?

In this seminar philosophers from the Oxford Uehiro Centre for Practical Ethics will explore and shed light on the profound ethical challenges around allocating limited health care resources.

Speakers include Professor Dominic Wilkinson, Professor Julian Savulescu and Dr Rebecca Brown. Guest lecture by Professor Thaddeus Pope (Professor of Law, Mitchell Hamline School of Law, Minnesota) on the US approach to allocating organs.

Topics include:

  • Allocating intensive care beds and balancing ethical values
  • Moralising medicine – is it ethical to allocate treatment based on responsibility for illness?
  • Cost-equivalence – rethinking treatment allocation.

This seminar is aimed at health professionals/ethicists

Places are strictly limited. Early bird registration £15/£10* if you register by 29th April, and £25/£20* subsequently.

*Discounted registration for students

Registration includes tea/coffee, and wine/soft drinks/cheese at the end.

Online Registration 

Philosophical case discussion and prize

The afternoon will conclude with a live “ethics committee” deliberation on a clinical case.
Attendees at the meeting are encouraged to submit a case for discussion based on their clinical experience.
If chosen for presentation, attendees will have the opportunity to present a short (5 minute) clinical summary.
They will also receive complimentary registration at the seminar, and a £40 Blackwell’s voucher.

To submit a case, please send a short (less than 200 word) deidentified case description including the key ethical questions to dominic.wilkinson@philosophy.ox.ac.uk

Oxford Uehiro Prize in Practical Ethics: When is Sex With Conjoined Twins Permissible?

This essay was the runner up in the Oxford Uehiro Prize in Practical Ethics Graduate Category

Written by University of Oxford student James Kirkpatrick

It is widely accepted that valid consent is necessary for the permissibility of sexual acts. This requirement explains why it is impermissible to have sex with non-human animals, children, and agents with severe cognitive impairments. This paper explores the implications of this requirement for the conditions under which conjoined twins may have sex.[1] I will argue that sex with conjoined twins is impermissible if one of them does not consent. This observation generalises to prohibitions on a wide range of everyday activities, such as masturbation, blood donations, and taking drugs to cure one’s headache. While these implications are highly counterintuitive, it is difficult to articulate the relevant moral difference between these cases. Continue reading

Oxford Uehiro Prize in Practical Ethics: The Paradox of the Benefiting Samaritan

This essay was the winner in the Oxford Uehiro Prize in Practical Ethics Graduate Category

Written by University of Oxford student Miles Unterreiner

Question to be answered: Why is it wrong to benefit from injustice?

In the 2005 film Thank You for Smoking, smooth-talking tobacco company spokesman Nick Naylor (Aaron Eckhart) is charged with publicly defending the interests of Big Tobacco. Naylor is invited to a panel discussion on live TV, where he faces an unfriendly studio audience; Robin Williger, a 15-year-old cancer patient who has recently quit smoking; and anti-smoking crusader Ron Goode, who works for an organization dedicated to fighting tobacco consumption. Naylor boldly goes on the attack against Goode, accusing him and his organization of benefiting from the well-publicized deaths of lung cancer patients:

Naylor: The Ron Goodes of this world want the Robin Willigers to die.

Goode: What?

Naylor: You know why? So that their budgets will go up. This is nothing less than trafficking in human misery, and you, sir, ought to be ashamed of yourself. Continue reading

Oxford Uehiro Prize in Practical Ethics: Why We Should Genetically ‘Disenhance’ Animals Used in Factory Farms

This essay was the winner in the Oxford Uehiro Prize in Practical Ethics Undergraduate Category

Written by University of Oxford student Jonathan Latimer

I will defend the process of genetic ‘disenhancement’ of animals used for factory farming. I suggest that disenhancement will significantly increase the quality of life for animals in factory farms, and that this benefit is robust against objections that disenhancement is harmful to animals and that it fails to address the immorality of factory farming. Contra a previous submission, I hope to recast disenhancement as something which ought to be seriously considered on behalf of animals in factory farms.

Currently, the factory farming of livestock animals for human consumption causes a great amount of suffering in those animals. It is widely acknowledged that the conditions many animals face in factory farms are abhorrent. Furthermore, demand for factory-farmed meat is increasing worldwide as developing economies grow more affluent. This will lead to more animals suffering in factory farms in the future. One potential solution to this problem is the ‘disenhancement’ of livestock animals. Disenhancement is a genetic modification that removes an animal’s capacity to feel pain. Scientists hope to be able to do this without inflicting any pain at all. So, disenhancement promises to reduce suffering in factory-farmed animals by removing their capacity to feel pain caused by their terrible environment. Continue reading

Oxford Uehiro Prize in Practical Ethics: On Relational Injustice: Could Colonialism Have Been Wrong Even if it Had Introduced More Benefits Than Harms?

This essay was awarded second place in the Oxford Uehiro Prize in Practical Ethics Undergraduate Category.

Written by University of Oxford student, Brian Wong

Recent debates over the legacy of colonialism – such as that of the British Empire – have often centred on whether members of colonies have, on balance, benefited from being subject to colonial rule. Such debates are not only epistemically futile, for counterfactual analysis remains necessarily and largely speculative; they also neglect a potential alternative to the discussion: that colonial projects could have been wrong independently of the harms they bring.

My thesis is that there existed the unoffsettable wrong of the relational injustice perpetuated under colonialism, such that colonialism was wrong even in cases where it introduced counterfactual-sensitive benefits. I will first discuss my concept of relational injustice, prior to establishing the empirical premise and explaining why such wrongs are unoffsettable by consequentialist gains. Continue reading
