Oxford Uehiro Prize in Practical Ethics: What, if Anything, is Wrong About Algorithmic Administration?

This essay received an honourable mention in the undergraduate category.

Written by University of Oxford student, Angelo Ryu.



The scope of modern administration is vast. We expect the state to perform an ever-increasing number of tasks, including the provision of services and the regulation of economic activity. This requires the state to make a large number of decisions in a wide array of areas. Inevitably, the scale and complexity of such decisions stretch the capacity of good governance.

In response, policymakers have begun to implement systems capable of automated decision making. For example, certain jurisdictions within the United States use an automated system to advise on criminal sentences. Australia uses an automated system for parts of its welfare program.

Such systems, it is said, will help address the costs of modern administration. It is plausibly argued that automation will lead to quicker, more efficient, and more consistent decisions – that it will ward off a return to the days of Dickens’ Bleak House.

Oxford Uehiro Prize in Practical Ethics: Why Is Virtual Wrongdoing Morally Disquieting, Insofar As It Is?

This essay was the winning entry in the undergraduate category of the 6th Annual Oxford Uehiro Prize in Practical Ethics.

Written by University of Oxford student, Eric Sheng.

In the computer game Red Dead Redemption 2 (henceforward, RDR2), players control a character in a virtual world. Among the characters represented by computer graphics but not controlled by a real-world player are suffragettes. Controversy arose when it became known that some players used their characters to torture or kill suffragettes. (One player’s character, for example, feeds a suffragette to an alligator.) In this essay, I seek to explain the moral disquiet – the intuition that things are awry from the moral perspective – that the players’ actions (call them, for short, ‘assaulting suffragettes’) provoke. The explanation will be an exercise in ‘moral psychology, philosophical not psychological’:[1] I seek not to causally explain our disquiet through the science of human nature, but to explain why things are indeed awry, and thus justify our disquiet.

My intention in posing the question in this way is to leave open the possibilities that our disquiet is justified although the players’ actions are not wrong, or that it’s justified but not principally by the wrongness of the players’ actions. These possibilities are neglected by previous discussions of virtual wrongdoing that ask: is this or that kind of virtual wrongdoing wrong? Indeed, I argue that some common arguments for the wrongness of virtual wrongdoing do not succeed in explaining our disquiet, and sketch a more plausible account of why virtual wrongdoing is morally disquieting insofar as it is, which invokes not the wrongness of the players’ actions but what these actions reveal about the players. By ‘virtual wrongdoing’ I mean an action by a player in the real world that intentionally brings about an action φV by a character in a virtual world V such that φV is wrong-in-V; and the criteria for evaluating an action’s wrongness-in-V are the same as those for evaluating an action’s wrongness in the real world.[2]

Cross Post: Privacy is a Collective Concern: When We Tell Companies About Ourselves, We Give Away Details About Others, Too.


This article was originally published in New Statesman America

Making Ourselves Better

Written by Stephen Rainey

Human beings are sometimes seen as uniquely capable of enacting life plans and controlling our environment. Take technology, for instance; with it we make the world around us yield to our desires in various ways. Communication technologies and global transport, for example, have the effect of practically shrinking a vast world, making hitherto impossible coordination possible among a global population. This contributes to a view of human-as-maker, or ‘homo faber’. But taking such a view can risk minimising human interests that ought not to be ignored.

Homo faber is a future-oriented, adaptable, rational animal, whose efforts are aligned with her interests when she creates technology that enables a stable counteraction of natural circumstance. Whereas animals are typically seen to have well-adapted responses to their environment, honed through generations of adaptation, human beings appear to have instead a general and adaptable skill that can emancipate them from material, external circumstances. We are bad at running away from danger, for instance, but good at building barriers to obviate the need to run. The protections this general, adaptable skill offers are inherently future-facing: humans seem to seek not to react to the environment, but to control it.


Regulating The Untapped Trove Of Brain Data

Written by Stephen Rainey and Christoph Bublitz

Increasing use of brain data, whether from research contexts, medical device use, or the growing consumer brain-tech sector, raises privacy concerns. Some already call for international regulation, especially as consumer neurotech is about to enter the market more widely. In this post, we wish to look at the regulation of brain data under the GDPR and suggest a modified understanding to provide better protection of such data.

In medicine, the use of brain-reading devices is increasing, e.g. brain-computer interfaces that afford communication or control of neural or motor prostheses. But there is also a range of non-medical devices in development, for applications from gaming to the workplace.

Currently marketed devices, e.g. by Emotiv and Neurosky, are not yet widespread, which might be owing to a lack of apps, issues with ease of use, or perhaps just a lack of perceived need. However, various tech companies have announced their entrance to the field and have invested significant sums. Kernel, a three-year-old multi-million-dollar company based in Los Angeles, wants to ‘hack the human brain’. More recently, it has been joined by Facebook, which wants to develop a means of controlling devices directly with data derived from the brain (to be developed by its not-at-all-sinister-sounding ‘Building 8’ group). Meanwhile, Elon Musk’s ‘Neuralink’ is a venture which aims to ‘merge the brain with AI’ by means of a ‘wizard hat for the brain’. Whatever that means, it is likely to be based on recording and stimulating the brain.


Better Living Through Neurotechnology

Written by Stephen Rainey

If ‘neurotechnology’ isn’t a glamour area for researchers yet, it’s not far off. Technologies centred upon reading the brain are rapidly being developed. Among the claims made for such neurotechnologies is that some can provide special access to normally hidden representations of consciousness. By recording, processing, and operationalising brain signals, we are promised greater understanding of our own brain processes. And since every conscious process is thought to be enacted, or subserved, or realised by a neural process, that means greater understanding of our consciousness too.

Besides understanding, these technologies provide opportunities for cognitive optimisation and enhancement too. By getting a handle on our obscure cognitive processes, we can get the chance to manipulate them. By representing our own consciousness to ourselves, through a neurofeedback device for instance, we can try to monitor and alter the processes we witness, changing our minds in a very literal sense.

This looks like some kind of technological mind-reading, and perhaps too good to be true. Is neurotechnology overclaiming its prospects? Maybe more pressingly, is it understating its difficulties?

Cross Post: Common Sense for A.I. Is a Great Idea. But it’s Harder Than it Sounds.

Written by Carissa Veliz

Crosspost from Slate.

At the moment, artificial intelligence may have a perfect memory and be better at arithmetic than us, but it is clueless. It takes a few seconds of interaction with any digital assistant to realize one is not in the presence of a very bright interlocutor. Among the unexpected items users have found in their shopping lists after talking to (or near) Amazon’s Alexa are 150,000 bottles of shampoo, sled dogs, “hunk of poo,” and a girlfriend.

The mere exasperation of talking to a digital assistant can be enough to make one miss human companionship, feel nostalgia for all things analog and dumb, and forswear any future attempts at communicating with mindless pieces of metal inexplicably labelled “smart.” (Not to mention all the privacy issues.) That A.I. does not understand what a shopping list is, and the kinds of items appropriate to such lists, is evidence of a much broader problem: it lacks common sense.

The Allen Institute for Artificial Intelligence, or AI2, created by Microsoft co-founder Paul Allen, has announced it is embarking on a new $125 million research initiative to try to change that. “To make real progress in A.I., we have to overcome the big challenges in the area of common sense,” Allen told the New York Times. AI2 takes common sense to include the “infinite set of facts, heuristics, observations … that we bring to the table when we address a problem, but the computer doesn’t.” Researchers will use a combination of crowdsourcing, machine learning, and machine vision to create a huge “repository of knowledge” that will bring about common sense. Paramount among its uses is getting A.I. to “understand what’s harmful to people.”


The ‘Killer Robots’ Are Us

Written by Dr Michael Robillard

In a recent New York Times article Dr Michael Robillard writes: “At a meeting of the United Nations Convention on Conventional Weapons in Geneva in November, a group of experts gathered to discuss the military, legal and ethical dimensions of emerging weapons technologies. Among the views voiced at the convention was a call for a ban on what are now being called “lethal autonomous weapons systems.”

A 2012 Department of Defense directive defines an autonomous weapon system as one that, “once activated, can select and engage targets without further intervention by a human operator. This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system, but can select and engage targets without further human input after activation.”"


Cross Post: Machine Learning and Medical Education: Impending Conflicts in Robotic Surgery

Guest Post by Nathan Hodson 

* Please note that this article is being cross-posted from the Journal of Medical Ethics Blog 

Research in robotics promises to revolutionize surgery. The Da Vinci system has already brought the first fruits of the revolution into the operating theater through remote-controlled laparoscopic (or “keyhole”) surgery. New developments are going further, augmenting the human surgeon and moving toward a future with fully autonomous robotic surgeons. Through machine learning, these robotic surgeons will likely one day supersede their makers and ultimately squeeze human surgical trainees out of the operating room.

This possibility raises new questions for those building and programming healthcare robots. In their recent essay entitled “Robot Autonomy for Surgery,” Michael Yip and Nikhil Das echoed a common assumption in health robotics research: “human surgeons [will] still play a large role in ensuring the safety of the patient.” If human surgical training is impaired by robotic surgery, however—as I argue it likely will be—then this safety net would not necessarily hold.

Imagine an operating theater. The autonomous robot surgeon makes an unorthodox move. The human surgeon observer is alarmed. As the surgeon reaches to take control, the robot issues an instruction: “Step away. Based on data from every single operation performed this year, by all automated robots around the world, the approach I am taking is the best.”

Should we trust the robot? Should we doubt the human expert? Shouldn’t we play it safe—but what would that mean in this scenario? Could such a future really materialize?


Guest Post: Mind the accountability gap: On the ethics of shared autonomy between humans and intelligent medical devices

Guest Post by Philipp Kellmeyer

Imagine you had epilepsy and, despite taking a daily cocktail of several anti-epileptic drugs, still suffered several seizures per week, some minor, some resulting in bruises and other injuries. The source of your epileptic seizures lies in a brain region that is important for language. Therefore, your neurologist told you, epilepsy surgery – removing brain tissue that has been identified as the source of seizures in continuous monitoring with intracranial electroencephalography (iEEG) – is not viable in your case because it would lead to permanent damage to your language ability.

There is, however, says your neurologist, an innovative clinical trial under way that might reduce the frequency and severity of your seizures. In this trial, a new device is implanted in your head that contains an electrode array for recording your brain activity directly from the brain surface and for applying small electric shocks to interrupt an impending seizure.

The electrode array connects wirelessly to a small computer that analyses the information from the electrodes to assess your seizure risk at any given moment and to decide when to administer an electric shock. The neurologist informs you that trials with similar devices have achieved a reduction in the frequency of severe seizures in 50% of patients, so there is a good chance that you would benefit from taking part in the trial.

Now, imagine you decided to participate in the trial and it turns out that the device comes with two options: In one setting, you get no feedback on your current seizure risk by the device and the decision when to administer an electric shock to prevent an impending seizure is taken solely by the device.

This keeps you completely out of the loop in terms of being able to modify your behaviour according to your seizure risk and – in a sense – relegates some autonomy of decision-making to the intelligent medical device inside your head.

In the other setting, the system comes with a “traffic light” that signals your current risk level for a seizure, with green indicating a low, yellow a medium, and red a high probability of a seizure. In case of an evolving seizure, the device may additionally warn you with an alarm tone. In this scenario, you are kept in the loop and retain your capacity to modify your behaviour accordingly, for example to step off a ladder or stop riding a bike when you are “in the red.”
