Information Ethics

Track Thyself? Personal Information Technology and the Ethics of Self-knowledge

Written by Muriel Leuenberger

The ancient Greek injunction “Know Thyself”, inscribed at the temple of Delphi, represents just one among many instances where we are encouraged to pursue self-knowledge. Socrates argued that “examining myself and others is the greatest good”, and according to Kant, moral self-cognition is “the First Command of all Duties to Oneself”. Moreover, the pursuit of self-knowledge, and how it helps us to become wiser, better, and happier, is such a common theme in popular culture that you can find numerous lists online of the 10, 15, or 39 best movies and books on self-knowledge.


Peter Railton’s Uehiro Lectures 2022

Written by Maximilian Kiener

Professor Peter Railton, from the University of Michigan, delivered the 2022 Uehiro Lectures in Practical Ethics. In a series of three consecutive presentations entitled ‘Ethics and Artificial Intelligence’, Railton focused on what has become one of the major areas in contemporary philosophy: the challenge of how to understand, interact with, and regulate AI.

Railton’s primary concern is not the ‘superintelligence’ that could vastly outperform humans and, as some have suggested, threaten human existence as a whole. Rather, Railton focuses on what we are already confronted with today, namely partially intelligent systems that increasingly execute a variety of tasks, from powering autonomous cars to assisting medical diagnostics, algorithmic decision-making, and more.

Three Observations about Justifying AI

Written by: Anantharaman Muralidharan, G Owen Schaefer, Julian Savulescu
Cross-posted with the Journal of Medical Ethics blog

Consider the following kind of medical AI. It consists of two parts. The first is a core deep machine learning algorithm. Such black-box algorithms may be more accurate than human judgment or interpretable algorithms, but they are notoriously opaque about the basis on which a decision was made. The second part is an algorithm that generates a post-hoc medical justification for the core algorithm’s output. Algorithms like this are already available for visual classification. When the primary algorithm identifies a given bird as a Western Grebe, the secondary algorithm provides a justification for this decision: “because the bird has a long white neck, pointy yellow beak and red eyes”. The justification goes beyond a mere description of the provided image or a definition of the bird in question: it links the information provided in the image to the features that distinguish the bird. The justification is also sufficiently fine-grained to account for why the bird in the picture is not a similar bird such as the Laysan Albatross. It is not hard to imagine that such an algorithm will soon be available for medical decisions, if it is not already. Let us call this type of AI “justifying AI”, to distinguish it from algorithms which try, to some degree or other, to wear their inner workings on their sleeves.

It might turn out that the medical justification given by the justifying AI sounds like pure nonsense. Rich Caruana et al. present a case in which a pneumonia model deemed asthmatics less at risk of dying from pneumonia. As a result, it recommended less aggressive treatments for asthmatics who contracted pneumonia. The primary algorithm’s key mistake was that it failed to account for the fact that asthmatics who contracted pneumonia had better outcomes only because they tended to receive more aggressive treatment in the first place. Even though the algorithm was more accurate on average, it was systematically mistaken about one subgroup. When incidents like these occur, one option is to disregard the primary AI’s recommendation. The rationale is that, by intervening in cases where the black-box gives an implausible recommendation or prediction, we could hope to do better than by relying on the black-box alone. The aim of having justifying AI is to make it easier to identify when the primary AI is misfiring. After all, we can expect trained physicians to recognise a good medical justification when they see one, and likewise to recognise bad justifications. The thought is that the secondary algorithm generating a bad justification is good evidence that the primary AI has misfired.

The worry is that our existing medical knowledge is notoriously incomplete in places. We should expect cases where the optimal decision vis-à-vis patient welfare lacks a plausible medical justification, at least on our current medical knowledge. For instance, lithium is used as a mood stabilizer, but why it works is poorly understood. This means that ignoring the black-box whenever a plausible justification in terms of our current medical knowledge is unavailable will tend to lead to less optimal decisions. Below are three observations that we might make about this type of justifying AI.


Ambient Intelligence

Written by Stephen Rainey

An excitingly futuristic world of seamless interaction with computers! A cybernetic environment that delivers what I want, when I want it! Or: a world built on vampiric databases, fed on myopic accounts of movements and preferences, loosely related to persons. Each is a possibility given ubiquitous ambient intelligence.

Ethics of the GameStop Short Squeeze

By Doug McConnell

Recently a large, loosely coordinated group of individual ‘retail investors’ have been buying up stocks that certain hedge funds had bet against (i.e. ‘shorted’). In doing so, the retail investors have driven up the price of those stocks. This has caused the hedge funds that shorted the stocks to lose billions of dollars and enabled a number of retail investors to get rich in the process. The phenomenon is anthropologically interesting because it is symbolic of a shift in power away from the traditional Wall Street players towards less wealthy, less well-connected individuals. But what are the ethics of this? Did Average Joe Trader just bring a measure of justice to Wall Street? Or did the mob unethically manipulate the market? If they did, are their actions any more unethical than the usual behaviour of institutional investors?

The Doctor-Knows-Best NHS Foundation Trust: a Business Proposal for the Health Secretary

By Charles Foster

Informed consent, in practice, is a bad joke. It’s a notion created by lawyers, and like many such notions it bears little relationship to the concerns that real humans have when they’re left to themselves, but it creates many artificial, lucrative, and expensive concerns.

Of course there are a few clinical situations where it is important that the patient reflects deeply and independently on the risks and benefits of the possible options, and there are a few people (I hope never to meet them: they would be icily un-Falstaffian) whose sole ethical lodestone is their own neatly and indelibly drafted life-plan. But those situations and those people are fortunately rare.

Regulating The Untapped Trove Of Brain Data

Written by Stephen Rainey and Christoph Bublitz

The increasing use of brain data, whether from research contexts, medical devices, or the growing consumer brain-tech sector, raises privacy concerns. Some already call for international regulation, especially as consumer neurotech is about to enter the market more widely. In this post, we wish to look at the regulation of brain data under the GDPR and suggest a modified understanding to provide better protection of such data.

In medicine, the use of brain-reading devices is increasing, e.g. Brain-Computer Interfaces that afford communication or control of neural and motor prostheses. But there is also a range of non-medical devices in development, for applications from gaming to the workplace.

Currently marketed devices, e.g. by Emotiv or Neurosky, are not yet widespread, which might be owing to a lack of apps, issues with ease of use, or perhaps just a lack of perceived need. However, various tech companies have announced their entrance into the field and have invested significant sums. Kernel, a three-year-old multi-million-dollar company based in Los Angeles, wants to ‘hack the human brain’. More recently, it has been joined by Facebook, which wants to develop a means of controlling devices directly with data derived from the brain (to be developed by its not-at-all-sinister-sounding ‘Building 8’ group). Meanwhile, Elon Musk’s ‘Neuralink’ is a venture which aims to ‘merge the brain with AI’ by means of a ‘wizard hat for the brain’. Whatever that means, it is likely to be based on recording and stimulating the brain.


The Gulf Between Japanese and English Google Image Search

By Anri Asagumo, Oxford Uehiro/St Cross Scholar (with input from Dr Tom Douglas and Dr Carissa Veliz)


Trigger Warning: This article deals with sexual violence, which could be potentially upsetting for some people.

Although Google claims in its policy that it restricts the promotion of adult-oriented content, there is a corner of the online world where its policy implementation seems loose: Google image search in the Japanese language. If one looks up ‘reipu’, a Japanese word for rape, on Google, the screen fills up with a heap of explicit thumbnails of porn movies, manga, and pictures of women being raped by men. The short descriptions of the thumbnails are repugnant: ‘Raping a girl at my first workplace’, ‘Raping a junior high-school girl’, ‘Raping cute girls’, ‘Raping a female lawyer’, ‘Raping a girl in a toilet’. As if rape in itself were not repulsive enough, many descriptions go even further, implying child rape. Similar results show up for ‘reipu sareta’ (‘I was raped’). This is strikingly different from the world of English Google image search, in which the top images usually send strong messages of support for victims and zero tolerance for sexual offenders. Another example of how the Japanese Google world differs from the English one is ‘Roshia-jin’ (‘Russian people’). Searching in Japanese yields 17 pictures of young, beautiful Russian women, while searching in English returns pictures of people of different ages and sexes.

The Dangers of Biography

By Charles Foster

A friend of mine has written a brilliant and justly celebrated biography. I am worried about her, and about her readers.

The biography is brilliant and engaging precisely because of the degree of rapport the author has established with her subject, and the rapport she brokers between her subject and her readers. What is the cost of that rapport?

My friend has had to keep the company of her (dead) subject for years. Her book is an invitation to others to keep that company for hours. Two ethical questions arise.

Listen Carefully

Written by Stephen Rainey, and Jason Walsh

Rhetoric about free speech being under attack is an enduring point of discussion across the media. It appears on the political agenda in various degrees of concreteness and abstraction. On some definitions, free speech amounts to an unrestrained liberty to say whatever one pleases. On others, it is carefully framed to exclude types of speech centrally intended to cause harm.

At the same time, more than ever the physical environment is a focus of both public and political attention. Following the BBC’s ‘Blue Planet II’ documentary series, for instance, a huge impetus gathered around the risk of microplastics to our water supply and, indeed, around how plastics in general damage the environment. As with many such issues, people have been happy to act. Following, belatedly, Ireland’s example, plastic bag use has plummeted in the UK, helped along by the introduction of a charge.

There are always those few who just don’t care, but when it comes to our shared natural spaces we’re generally pretty good at reacting. Be it taxing plastic bags, switching to paper straws, or supporting the pedestrianisation of polluted areas, there is the chance for open conversations about the spaces we must share. Environmental awareness and anti-pollution attitudes are as close to a shared politics as we might get, at least in terms of what’s at stake. Can the same be said for the informational environment that we share?