
Announcement: Oxford Uehiro Prize in Practical Ethics

Graduate and undergraduate students currently enrolled at the University of Oxford in any subject are invited to enter the Oxford Uehiro Prize in Practical Ethics by submitting an essay of up to 2000 words on any topic relevant to practical ethics. Visiting students who are registered as recognized students and are paying fees are eligible; informal visitors are not. Two undergraduate papers and two graduate papers will be shortlisted from those submitted and will go forward to a public presentation and discussion, where the winner of each category will be selected.

The winner from each category will receive £300, and the runner up £100. Revised versions of the two winning essays will be considered for publication in the Journal of Practical Ethics, though publication is not guaranteed.

To enter, please submit your written papers by the end of 6th February 2019 to rocci.wilkinson@philosophy.ox.ac.uk. Finalists will be notified in mid February. The public presentation will take place in 8th Week, Hilary term 2019, on Tuesday 5th March. Please save this presentation date, as you will need to attend if selected as a finalist. 

Detailed instructions 


Response from David S. Oderberg to “Against Conscientious Objection In Health Care: A Counterdeclaration And Reply To Oderberg”

I am grateful to Prof. Savulescu and Dr Giubilini for taking the time and care to respond in detail to my Declaration in Support of Conscientious Objection in Health Care. I also thank Prof. Savulescu for giving me the opportunity to reply to their lengthy analysis. The authors make a series of important criticisms and observations, all of which I will face directly. The topic of freedom of conscience in medicine is both contentious and likely to become increasingly urgent in the future, so it is as well to dispel misunderstandings, clarify assertions and respond to objections as thoroughly as possible. That said, I hope I do not try the reader’s patience by discussing Giubilini and Savulescu’s objections point by point, in the order in which they raise them.


Lecture and Book Launch: Ethics, Conflict and Medical Treatment for Children – From Disagreement to Dissensus

Watch the lecture by Professors Dominic Wilkinson and Julian Savulescu at the book launch for ‘Ethics, Conflict and Medical Treatment for Children’, which took place on 4 October at the Oxford Martin School, University of Oxford.


In Defence of Trashing

Written by University of Oxford DPhil student, Tena Thau

Prior to this year’s final exams, Oxford University announced a crackdown on “trashing”, the post-exam tradition of dousing finalists in champagne, silly string, confetti, and the like. In conjunction with this announcement, the University released a memo outlining its objections to trashing.

In Part I of this post, I will present a point-by-point refutation of the arguments made in this memo. In Part II, I will sketch out what I think is the central moral concern with trashing: that it is an expression of elitism.  I will conclude that this ‘elitism objection’ to trashing should be rejected, showing why it is not trashing – but rather, the campaign against it – that is guilty of elitism.

Should Abortion be a Matter of Referendum?

Alberto Giubilini
Wellcome Centre for Ethics and Humanities and Oxford Martin School, University of Oxford

I am writing this post on the 25th of May, as the Irish abortion referendum is taking place. However, you will probably be reading it once the results are already known. I am not going to write in support of either side of the debate here anyway. I want to write about the appropriateness (from an ethical point of view) of this referendum itself. I want to suggest that a referendum is not the appropriate way to solve the dispute at stake.

Irish people have been asked whether they want to repeal the Eighth Amendment of the Irish Constitution, which gives foetuses and pregnant women an “equal right to life”. It is commonly assumed that the Eighth Amendment prevented the Irish Government from legalizing abortion, except in the extreme and very rare circumstances in which abortion is necessary to save the life of a pregnant woman. If the majority of Irish people vote “yes”, abortion can become legal in the country. If the majority vote “no”, abortion will remain a crime in the country, with the exception of a few extreme and very rare circumstances. More specifically, voting “no” means voting in favour of the idea that in Ireland a foetus has a right to life equal to the right to life of the woman. Voting “yes” means voting in favour of the idea that in Ireland the foetus does not have a right to life comparable to that of a woman; in other words, that for the purpose of attributing a right to life (though not necessarily for other purposes) it can be considered merely part of the woman’s body, and therefore something that a woman can permissibly decide not to keep alive as a matter of bodily autonomy or, in many cases, and depending on what definition of “health” we adopt, as a matter of basic healthcare.


Announcement: Vacancy Research Fellow in Applied Moral Philosophy

Applications are invited for a full-time Research Fellow position (Grade 7: £31,604 – £38,883 p.a.) to conduct research in philosophy and applied ethics for the research project: Neurointerventions in Crime Prevention: An Ethical Analysis, which is hosted by the Oxford Uehiro Centre for Practical Ethics within the Faculty of Philosophy.

This post is fixed-term for 1 year from the date of appointment with excellent opportunities for career advancement.

The Fellow will conduct collaborative research under the supervision of Dr Thomas Douglas (Principal Investigator for the research project), with a focus on ‘The Ethics of Environmental and Biological Behavioural Influence in Crime Prevention’, examining the nature and moral status of different kinds of behavioural influence (including coercion, manipulation, nudging and biological intervention).

The Fellow will produce publications of high quality research, undertake literature reviews, and participate in other project activities. This participation may involve developing collaborative relationships, contributing to public engagement activities, grant applications and event planning, and performing other occasional duties such as event organisation, administration and teaching.

The postholder is required to hold the degree of PhD (or equivalent), or be a doctoral candidate near completion, in philosophy or other relevant discipline (such as law or political theory) with specialisation in applied ethics, normative ethics, political philosophy or other related sub-discipline. Also essential are excellent research skills, an outstanding research record, and demonstrated ability to publish in journals in applied ethics, normative ethics, or political philosophy.

Applications are to be submitted no later than 12.00 midday on Friday 8 June 2018. Further details, including how to apply, are available via the vacancy listing.

Ethical AI Kills Too: An Assessment of the Lords Report on AI in the UK

Hazem Zohny and Julian Savulescu
Cross-posted with the Oxford Martin School

Developing AI that does not eventually take over humanity or turn the world into a dystopian nightmare is a challenge. It also has an interesting effect on philosophy, and in particular ethics: suddenly, debates about the good and the bad, the fair and the unfair, that have run for millennia need to be concluded and programmed into machines. Does the autonomous car in an unavoidable collision swerve to avoid killing five pedestrians at the cost of its passenger’s life? And what exactly counts as unfair discrimination or privacy violation when “Big Data” suggests an individual is, say, a likely criminal?

The recent House of Lords Artificial Intelligence Committee’s report puts the centrality of ethics to AI front and centre. It engages thoughtfully with a wide range of issues: algorithmic bias, the monopolised control of data by large tech companies, the disruptive effects of AI on industries, and its implications for education, healthcare, and weaponry.

Many of these are economic and technical challenges. For instance, the report notes Google’s continued inability to fix its visual identification algorithms, which it emerged three years ago could not distinguish between gorillas and black people. For now, the company simply does not allow users of Google Photos to search for gorillas.

But many of the challenges are also ethical – in fact, central to the report is that while the UK is unlikely to lead globally in the technical development of AI, it can lead the way in putting ethics at the centre of AI’s development and use.


Guest Post: Cambridge Analytica: You Can Have My Money but Not My Vote

Emily Feng-Gu, Medical Student, Monash University

When news broke that Facebook data from 50 million American users had been harvested and misused, and that Facebook had kept silent about it for two years, the 17th of March 2018 became a bad day for the mega-corporation. In the week following what became known as the Cambridge Analytica scandal, Facebook’s market value fell by around $80 billion. Facebook CEO Mark Zuckerberg came under intense scrutiny and criticism, the #DeleteFacebook movement was born, and the incident received wide media coverage. Elon Musk, the tech billionaire and founder of Tesla, was one high-profile deleter. Facebook, however, is only one morally questionable half of the story.

Cambridge Analytica was allegedly involved in influencing the outcomes of several high-profile elections, including the 2016 US election, the 2016 Brexit referendum, and the 2013 and 2017 Kenyan elections. Its methods involve data mining and analysis to more precisely tailor campaign materials to audiences and, as whistleblower Christopher Wylie put it, ‘target their inner demons.’ The practice, known as ‘micro-targeting’, has become more common in the digital age of politics and aims to influence swing-voter behaviour by using data and information to home in on fears, anxieties, or attitudes which campaigns can use to their advantage. This was one of the techniques used in Trump’s campaign, targeting the 50 million unsuspecting Americans whose Facebook data was misused. Further adding to the ethical unease, the company was founded by Republican key players Steve Bannon, later to become Trump’s chief strategist, and billionaire Republican donor Robert Mercer.

There are two broad issues raised by the incident.


Guest Post: Consequentialism and Ethics? Bridging the Normative Gap.

Written by Simon Beard

University of Cambridge

After years of deliberation, a US moratorium on so-called ‘gain of function’ experiments, involving the production of novel pathogens with a high degree of pandemic potential, has been lifted [https://www.nih.gov/about-nih/who-we-are/nih-director/statements/nih-lifts-funding-pause-gain-function-research]. At the same time, a ground-breaking new set of guidelines about how and when such experiments can be funded has been published [https://thebulletin.org/new-pathogen-research-rules-gain-function-loss-clarity11540] by the National Institutes of Health. This is to be welcomed, and I hope that these guidelines stimulate broader discussions about the ethics and funding of dual-use scientific research, both inside and outside of the life sciences. At the very least, it is essential that people learn from this experience and do not engage in the kind of intellectual head-banging that has undermined important research and disrupted the careers of talented researchers.

Yet, there is something in these guidelines that many philosophers may find troubling.

These new guidelines insist, for the first time it seems, that NIH funding will depend not only on the benefits of scientific research outweighing the potential risks, but also on whether or not the research is “ethically justified”. In defining what is ethically justifiable, the NIH make specific reference to standards of beneficence, non-maleficence, justice, scientific freedom, respect for persons and responsible stewardship.

Much has been made of this additional dimension of evaluation and whether or not review committees will be up to assessing it. Whereas before, it is said, they merely had to assess whether research would have good or bad outcomes, they now have to determine whether it is right or wrong as well!

Cross Post: Common Sense for A.I. Is a Great Idea. But it’s Harder Than it Sounds.

Written by Carissa Veliz

Crosspost from Slate.  Click here to read the full article

At the moment, artificial intelligences may have perfect memories and be better at arithmetic than we are, but they are clueless. It takes only a few seconds of interaction with any digital assistant to realize one is not in the presence of a very bright interlocutor. Among the unexpected items users have found in their shopping lists after talking to (or near) Amazon’s Alexa are 150,000 bottles of shampoo, sled dogs, “hunk of poo,” and a girlfriend.

The mere exasperation of talking to a digital assistant can be enough to make one miss human companionship, feel nostalgia for all things analog and dumb, and forswear any future attempts at communicating with mindless pieces of metal inexplicably labelled “smart.” (Not to mention all the privacy issues.) A.I.’s failure to understand what a shopping list is, and the kinds of items appropriate to such lists, is evidence of a much broader problem: a lack of common sense.

The Allen Institute for Artificial Intelligence, or AI2, created by Microsoft co-founder Paul Allen, has announced it is embarking on a new $125 million research initiative to try to change that. “To make real progress in A.I., we have to overcome the big challenges in the area of common sense,” Allen told the New York Times. AI2 takes common sense to include the “infinite set of facts, heuristics, observations … that we bring to the table when we address a problem, but the computer doesn’t.” Researchers will use a combination of crowdsourcing, machine learning, and machine vision to create a huge “repository of knowledge” that will bring about common sense. Of paramount importance among its uses is to get A.I. to “understand what’s harmful to people.”

