Information Ethics

The Dangers of Biography

By Charles Foster

A friend of mine has written a brilliant and justly celebrated biography. I am worried about her, and about her readers.

The biography is brilliant and engaging precisely because of the degree of rapport the author has established with her subject, and the rapport she brokers between her subject and her readers. What is the cost of that rapport?

My friend has had to keep the company of her (dead) subject for years. Her book is an invitation to others to keep that company for hours. Two ethical questions arise. Continue reading

Listen Carefully

Written by Stephen Rainey and Jason Walsh

Rhetoric about free speech being under attack is an enduring point of discussion across the media. It appears on the political agenda in varying degrees of concreteness and abstraction. On some definitions, free speech amounts to an unrestrained liberty to say whatever one pleases; on others, it is carefully framed to exclude types of speech centrally intended to cause harm.

At the same time, the physical environment is more than ever a focus of both public and political attention. Following the BBC’s ‘Blue Planet II’ documentary series, for instance, a huge impetus gathered around the risk of micro-plastics to our water supply and, indeed, around how plastics in general damage the environment. As with many such issues, people have been happy to act. Belatedly following Ireland’s example, plastic bag use has plummeted in the UK, helped along by the introduction of a tax.

There are always those few who just don’t care but, when it comes to our shared natural spaces, we’re generally pretty good at reacting. Be it taxing plastic bags, switching to paper straws, or supporting pedestrianisation of polluted areas, there is the chance for open conversations about the spaces we must share. Environmental awareness and anti-pollution attitudes are as close to shared politics as we might get, at least in terms of what’s at stake. Can the same be said for the informational environment that we share? Continue reading

Evil Online and the Moral Fog

The following is based on a brief presentation at the launch of Evil Online, by Dean Cocking and Jeroen van den Hoven (Wiley-Blackwell), in Bendigo, Australia, on 20 September 2018. It was an honour and a pleasure to be invited to speak, and I thank Dean for the opportunity. Continue reading

Ethical AI Kills Too: An Assessment of the Lords Report on AI in the UK

Hazem Zohny and Julian Savulescu
Cross-posted with the Oxford Martin School

Developing AI that does not eventually take over humanity or turn the world into a dystopian nightmare is a challenge. It also has an interesting effect on philosophy, and in particular ethics: suddenly, many of the millennia-long debates about the good and the bad, the fair and the unfair, need to be concluded and programmed into machines. Does the autonomous car in an unavoidable collision swerve to avoid killing five pedestrians at the cost of its passenger’s life? And what exactly counts as unfair discrimination or privacy violation when “Big Data” suggests an individual is, say, a likely criminal?

The recent report of the House of Lords Artificial Intelligence Committee puts the ethics of AI front and centre. It engages thoughtfully with a wide range of issues: algorithmic bias, the monopolised control of data by large tech companies, the disruptive effects of AI on industries, and its implications for education, healthcare, and weaponry.

Many of these are economic and technical challenges. For instance, the report notes Google’s continued inability to fix its visual identification algorithms, which, it emerged three years ago, could not distinguish between gorillas and black people. For now, the company simply does not allow users of Google Photos to search for gorillas.

But many of the challenges are also ethical – in fact, central to the report is the claim that while the UK is unlikely to lead globally in the technical development of AI, it can lead the way in putting ethics at the centre of AI’s development and use.

Continue reading

Facebook, Big Data, and the Trust of the Public

By Mackenzie Graham

Facebook CEO Mark Zuckerberg recently appeared before members of the United States Congress to address his company’s involvement in the harvesting and improper distribution of approximately 87 million Facebook profiles (about 1 million of them British) to the data analytics firm Cambridge Analytica. In brief, Cambridge Analytica is a British political consulting firm which uses online user data (such as Facebook profiles) to construct profiles of individuals, which can then be used for what it calls ‘behavioural micro-targeting’: advertisements tailored to the recipient based on their internet activity. In 2016, Cambridge Analytica was contracted by Donald Trump’s presidential campaign, as well as by the ‘Leave EU’ campaign prior to Britain’s referendum on leaving the European Union.

Continue reading

Guest Post: Cambridge Analytica: You Can Have My Money but Not My Vote

Emily Feng-Gu, Medical Student, Monash University

When news broke that Facebook data from 50 million American users had been harvested and misused, and that Facebook had kept silent about it for two years, the 17th of March 2018 became a bad day for the mega-corporation. In the week following what became known as the Cambridge Analytica scandal, Facebook’s market value fell by around $80 billion. Facebook CEO Mark Zuckerberg came under intense scrutiny and criticism, the #DeleteFacebook movement was born, and the incident received wide media coverage. Elon Musk, the tech billionaire and founder of Tesla, was one high-profile deleter. Facebook, however, is only one morally questionable half of the story.

Cambridge Analytica was allegedly involved in influencing the outcomes of several high-profile elections, including the 2016 US election, the 2016 Brexit referendum, and the 2013 and 2017 Kenyan elections. Its methods involve data mining and analysis to tailor campaign materials more precisely to audiences and, as whistleblower Christopher Wylie put it, ‘target their inner demons.’ [1] The practice, known as ‘micro-targeting’, has become more common in the digital age of politics and aims to influence swing-voter behaviour by using data and information to home in on fears, anxieties, or attitudes which campaigns can use to their advantage. This was one of the techniques used in Trump’s campaign, targeting the 50 million unsuspecting Americans whose Facebook data was misused. Further adding to the ethical unease, the company was founded by key Republican players Steve Bannon, later to become Trump’s chief strategist, and billionaire Republican donor Robert Mercer.

There are two broad issues raised by the incident.

Continue reading

Can We Trust Research in Science and Medicine?

By Brian D. Earp (@briandavidearp)

Readers of the Practical Ethics Blog might be interested in this series of short videos in which I discuss some of the major ongoing problems with research ethics and publication integrity in science and medicine. How much of the published literature is trustworthy? Why is peer review such a poor quality control mechanism? How can we judge whether someone is really an expert in a scientific area? What happens when empirical research gets polarized? Most of these are short – just a few minutes. Links below:

Why most published research probably is false

The politicization of science and the problem of expertise

Science’s publication bias problem – why negative results are important

Getting beyond accusations of being either “pro-science” or “anti-science”

Are we all scientific experts now? When to be skeptical about scientific claims, and when to defer to experts

Predatory open access publishers and why peer review is broken

The future of scientific peer review

Sloppy science going on at the CDC and WHO

Dogmas in science – how do they form?

Please note: this post will be cross-published with the Journal of Medical Ethics Blog.

Using AI to Predict Criminal Offending: What Makes it ‘Accurate’, and What Makes it ‘Ethical’?

Jonathan Pugh and Tom Douglas

The Durham Police force plans to use an artificial intelligence system to inform decisions about whether or not to keep a suspect in custody.

Developed using data collected by the force, the Harm Assessment Risk Tool (HART) has already undergone a two-year trial period to monitor its accuracy. Over the trial period, predictions of low risk were accurate 98% of the time, whilst predictions of high risk were accurate 88% of the time, according to media reports. Whilst HART has not so far been used to inform custody sergeants’ decisions during this trial period, the police force now plans to take the system live.
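To see what those two figures can mean in practice, here is a minimal bit of arithmetic, assuming a purely hypothetical split of 800 low-risk and 200 high-risk labels (the trial’s actual numbers are not given here):

```python
# Illustrative arithmetic only: the label split below is an assumption,
# not data from the Durham trial.
low_risk_labels = 800    # assumed: suspects the tool labels low risk
high_risk_labels = 200   # assumed: suspects the tool labels high risk

# Reported figures: 98% of low-risk and 88% of high-risk predictions
# proved accurate over the trial period.
wrong_low_risk = low_risk_labels * (1 - 0.98)    # reoffenders labelled safe
wrong_high_risk = high_risk_labels * (1 - 0.88)  # non-reoffenders labelled risky

print(f"Wrong low-risk calls:  {wrong_low_risk:.0f}")   # 16
print(f"Wrong high-risk calls: {wrong_high_risk:.0f}")  # 24
```

Even with headline accuracy in the high nineties, the absolute number of each kind of error depends heavily on how many suspects fall under each label.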

Given the high stakes involved in the criminal justice system, and the way in which artificial intelligence is beginning to surpass human decision-making capabilities in a wide array of contexts, it is unsurprising that criminal justice authorities have sought to harness AI. However, the use of algorithmic decision-making in this context also raises ethical issues. In particular, some have been concerned about the potentially discriminatory nature of the algorithms employed by criminal justice authorities.

These issues are not new. In the past, offender risk assessment often relied heavily on psychiatrists’ judgements. However, partly due to concerns about inconsistency and poor accuracy, criminal justice authorities now use algorithmic risk assessment tools. Based on studies of past offenders, these tools use forensic history, mental health diagnoses, demographic variables, and other factors to produce a statistical assessment of re-offending risk.

Beyond concerns about discrimination, algorithmic risk assessment tools raise a wide range of ethical questions, as we have discussed with colleagues in the linked paper. Here we address one that is particularly apposite with respect to HART: how should we balance the conflicting moral values at stake in deciding the kind of accuracy we want such tools to prioritise?
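To make that trade-off concrete, here is a minimal sketch in Python – not HART’s actual model or data – of a hypothetical risk score with an adjustable decision threshold. Lowering the threshold misses fewer genuine reoffenders (fewer false negatives) but wrongly flags more harmless people (more false positives), and vice versa; where to set it is precisely the moral question.

```python
import random

random.seed(0)

# Hypothetical cohort of (truly_reoffends, model_risk_score) pairs.
# Reoffenders tend to score higher, but imperfectly, as with any real tool.
cohort = [(True, random.betavariate(4, 2)) for _ in range(100)] + \
         [(False, random.betavariate(2, 4)) for _ in range(400)]

for threshold in (0.3, 0.5, 0.7):
    false_neg = sum(1 for reoffends, s in cohort if reoffends and s < threshold)
    false_pos = sum(1 for reoffends, s in cohort if not reoffends and s >= threshold)
    print(f"threshold {threshold}: missed reoffenders = {false_neg}, "
          f"wrongly flagged = {false_pos}")
```

No threshold eliminates both kinds of error at once; choosing one is choosing whose interests the tool protects first.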

Continue reading

The Clickbait Candidate

By James Williams (@WilliamsJames_)
Note: This is a cross-post with Quillette magazine.

While ‘interrobang’ sounds like a technique Donald Trump might add to the Guantanamo Bay playbook, it in fact refers to a punctuation mark: a disused mashup of interrogation and exclamation that indicates shock, surprise, excitement, or disbelief. It looks like this: ‽ (a rectangle means your font doesn’t support the symbol). In view of how challenging it seems for anyone to articulate the fundamental weirdness of Trump’s proximity to the office of President of the United States, I propose that we resuscitate the interrobang, because our normal orthographic tools clearly are not up to the task.

Yet even more interrobang-able than the prospect of a Trump presidency is the fact that those opposing his candidacy seem to have almost no understanding of the media dynamics that have enabled it to rise and thrive. Trump is perhaps the most straightforward embodiment of the dynamics of the so-called ‘attention economy’—the pervasive, all-out war over our attention in which all of our media have now been conscripted—that the world has yet seen. He is one of the geniuses of our time in the art of attentional manipulation.

If we ever hope to have a societal conversation about the design ethics of the attention economy—especially the ways in which it incentivizes technology design to push certain buttons in our brains that are incompatible with the assumptions of democracy—now would be the time. Continue reading

Guest Post: Mind the accountability gap: On the ethics of shared autonomy between humans and intelligent medical devices

Guest Post by Philipp Kellmeyer

Imagine you had epilepsy and, despite taking a daily cocktail of several anti-epileptic drugs, still suffered several seizures per week, some minor, some resulting in bruises and other injuries. The source of your epileptic seizures lies in a brain region that is important for language. Therefore, your neurologist told you, epilepsy surgery – removing brain tissue that has been identified as the source of seizures in continuous monitoring with intracranial electroencephalography (iEEG) – is not viable in your case because it would lead to permanent damage to your language ability.

There is, however, says your neurologist, an innovative clinical trial under way that might reduce the frequency and severity of your seizures. In this trial, a new device is implanted in your head that contains an electrode array for recording your brain activity directly from the brain surface and for applying small electric shocks to interrupt an impending seizure.

The electrode array connects wirelessly to a small computer that analyses the information from the electrodes to assess your seizure risk at any given moment and to decide when to administer an electric shock. The neurologist informs you that trials with similar devices have achieved a reduction in the frequency of severe seizures in 50% of patients, so that there would be a good chance that you would benefit from taking part in the trial.

Now, imagine you decided to participate in the trial and it turns out that the device comes with two options. In one setting, you get no feedback on your current seizure risk, and the decision of when to administer an electric shock to prevent an impending seizure is taken solely by the device.

This keeps you completely out of the loop in terms of being able to modify your behaviour according to your seizure risk and – in a sense – delegates some decision-making autonomy to the intelligent medical device inside your head.

In the other setting, the system comes with a “traffic light” that signals your current risk level for a seizure, with green indicating a low, yellow a medium, and red a high probability of a seizure. In case of an evolving seizure, the device may additionally warn you with an alarm tone. In this scenario, you are kept in the loop and retain your capacity to modify your behaviour accordingly, for example to climb down from a ladder or stop riding a bike when you are “in the red.”
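The difference between the two settings can be pictured as a small piece of decision logic. Below is a minimal sketch in Python with assumed probability cut-offs; the real thresholds, and the model behind the risk estimate, belong to the device and are not given in the post.

```python
# Sketch of the "traffic light" feedback setting; the cut-offs are
# assumptions for illustration, not the device's actual parameters.
def traffic_light(seizure_probability: float) -> str:
    """Map the device's estimated seizure probability to a feedback colour."""
    if seizure_probability < 0.2:    # assumed low-risk cut-off
        return "green"
    if seizure_probability < 0.6:    # assumed medium-risk cut-off
        return "yellow"
    return "red"                     # high risk: may also sound an alarm tone

# In the first setting, the same estimate would instead trigger an electric
# shock automatically, keeping the patient out of the loop.
for p in (0.05, 0.40, 0.85):
    print(f"estimated risk {p:.2f} -> {traffic_light(p)}")
```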

Continue reading
