
Information Ethics

Guest Post: Cambridge Analytica: You Can Have My Money but Not My Vote

by Emily Feng-Gu, Medical Student, Monash University

When news broke that Facebook data from 50 million American users had been harvested and misused, and that Facebook had kept silent about it for two years, the 17th of March 2018 became a bad day for the mega-corporation. In the week following what became known as the Cambridge Analytica scandal, Facebook’s market value fell by around $80 billion. Facebook CEO Mark Zuckerberg came under intense scrutiny and criticism, the #DeleteFacebook movement was born, and the incident received wide media coverage. Elon Musk, the tech billionaire and founder of Tesla, was one high-profile deleter. Facebook, however, is only one morally questionable half of the story.

Cambridge Analytica was allegedly involved in influencing the outcomes of several high-profile elections, including the 2016 US election, the 2016 Brexit referendum, and the 2013 and 2017 Kenyan elections. Its methods involve data mining and analysis to tailor campaign materials more precisely to audiences and, as whistleblower Christopher Wylie put it, ‘target their inner demons.’1 The practice, known as ‘micro-targeting’, has become more common in the digital age of politics and aims to influence swing voter behaviour by using data and information to home in on fears, anxieties, or attitudes which campaigns can use to their advantage. This was one of the techniques used in Trump’s campaign, targeting the 50 million unsuspecting Americans whose Facebook data was misused. Further adding to the ethical unease, the company was founded by key Republican players Steve Bannon, later to become Trump’s chief strategist, and billionaire Republican donor Robert Mercer.
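
To make the mechanics of micro-targeting concrete, here is a deliberately simplified sketch. The trait, thresholds, and ad variants are invented for illustration and are not Cambridge Analytica’s actual models: voters are scored on a psychological trait inferred from their data, and each segment is served the message predicted to resonate with it.

```python
# Hypothetical illustration of micro-targeting: segment voters by an
# inferred trait score and pick the ad variant aimed at that segment.
# The trait, thresholds, and messages are invented for illustration only.

def choose_ad_variant(inferred_anxiety_score: float) -> str:
    """Map an inferred trait score (0-1) to a tailored campaign message."""
    if inferred_anxiety_score > 0.7:
        return "fear-framed message (crime, security)"
    elif inferred_anxiety_score > 0.4:
        return "economic-insecurity message (jobs, wages)"
    else:
        return "generic get-out-the-vote message"

voters = {"voter_001": 0.82, "voter_002": 0.35, "voter_003": 0.55}
for voter_id, score in voters.items():
    print(voter_id, "->", choose_ad_variant(score))
```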

There are two broad issues raised by the incident.

Read More »Guest Post: Cambridge Analytica: You Can Have My Money but Not My Vote

Can We Trust Research in Science and Medicine?

By Brian D. Earp (@briandavidearp)

Readers of the Practical Ethics Blog might be interested in this series of short videos in which I discuss some of the major ongoing problems with research ethics and publication integrity in science and medicine. How much of the published literature is trustworthy? Why is peer review such a poor quality control mechanism? How can we…

Read More »Can We Trust Research in Science and Medicine?

Using AI to Predict Criminal Offending: What Makes it ‘Accurate’, and What Makes it ‘Ethical’.

Jonathan Pugh

Tom Douglas

 

The Durham Police force plans to use an artificial intelligence system to inform decisions about whether or not to keep a suspect in custody.

Developed using data collected by the force, the Harm Assessment Risk Tool (HART) has already undergone a two-year trial period to monitor the accuracy of the tool. Over the trial period, predictions of low risk were accurate 98% of the time, whilst predictions of high risk were accurate 88% of the time, according to media reports. Whilst HART has not so far been used to inform custody sergeants’ decisions during this trial period, the police force now plans to take the system live.

Given the high stakes involved in the criminal justice system, and the way in which artificial intelligence is beginning to surpass human decision-making capabilities in a wide array of contexts, it is unsurprising that criminal justice authorities have sought to harness AI. However, the use of algorithmic decision-making in this context also raises ethical issues. In particular, some have been concerned about the potentially discriminatory nature of the algorithms employed by criminal justice authorities.

These issues are not new. In the past, offender risk assessment often relied heavily on psychiatrists’ judgements. However, partly due to concerns about inconsistency and poor accuracy, criminal justice authorities now use algorithmic risk assessment tools. Based on studies of past offenders, these tools use forensic history, mental health diagnoses, demographic variables and other factors to produce a statistical assessment of re-offending risk.

Beyond concerns about discrimination, algorithmic risk assessment tools raise a wide range of ethical questions, as we have discussed with colleagues in the linked paper. Here we address one that is particularly apposite with respect to HART: how should we balance the conflicting moral values at stake in deciding the kind of accuracy we want such tools to prioritise?
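
To see why ‘accuracy’ is not a single number, consider a toy sketch (the scores and thresholds are invented, not HART’s actual outputs): a risk tool assigns each suspect a probability of re-offending, and the custody threshold chosen determines how errors are distributed between false positives (low-risk people detained) and false negatives (high-risk people released). Moving the threshold trades one kind of error for the other, which is precisely the value-laden choice at issue.

```python
# Toy illustration (not HART): how the choice of risk threshold trades
# false positives against false negatives on the same set of predictions.

# Each pair is (predicted probability of re-offending, actually re-offended?)
predictions = [
    (0.05, False), (0.10, False), (0.20, False), (0.30, True),
    (0.45, False), (0.60, True), (0.75, False), (0.90, True),
]

def error_counts(threshold: float):
    """Count false positives and false negatives at a given custody threshold."""
    fp = sum(1 for p, offended in predictions if p >= threshold and not offended)
    fn = sum(1 for p, offended in predictions if p < threshold and offended)
    return fp, fn

for threshold in (0.3, 0.5, 0.7):
    fp, fn = error_counts(threshold)
    print(f"threshold={threshold}: false positives={fp}, false negatives={fn}")
```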

Read More »Using AI to Predict Criminal Offending: What Makes it ‘Accurate’, and What Makes it ‘Ethical’.

The Clickbait Candidate

By James Williams (@WilliamsJames_)
Note: This is a cross-post with Quillette magazine.

While ‘interrobang’ sounds like a technique Donald Trump might add to the Guantanamo Bay playbook, it in fact refers to a punctuation mark: a disused mashup of interrogation and exclamation that indicates shock, surprise, excitement, or disbelief. It looks like this: ‽ (a rectangle means your font doesn’t support the symbol). In view of how challenging it seems for anyone to articulate the fundamental weirdness of Trump’s proximity to the office of President of the United States, I propose that we resuscitate the interrobang, because our normal orthographic tools clearly are not up to the task.

Yet even more interrobang-able than the prospect of a Trump presidency is the fact that those opposing his candidacy seem to have almost no understanding of the media dynamics that have enabled it to rise and thrive. Trump is perhaps the most straightforward embodiment of the dynamics of the so-called ‘attention economy’—the pervasive, all-out war over our attention in which all of our media have now been conscripted—that the world has yet seen. He is one of the geniuses of our time in the art of attentional manipulation.

If we ever hope to have a societal conversation about the design ethics of the attention economy—especially the ways in which it incentivizes technology design to push certain buttons in our brains that are incompatible with the assumptions of democracy—now would be the time.

Read More »The Clickbait Candidate

Guest Post: Mind the accountability gap: On the ethics of shared autonomy between humans and intelligent medical devices

Guest Post by Philipp Kellmeyer

Imagine you had epilepsy and, despite taking a daily cocktail of several anti-epileptic drugs, still suffered several seizures per week, some minor, some resulting in bruises and other injuries. The source of your epileptic seizures lies in a brain region that is important for language. Therefore, your neurologist told you, epilepsy surgery – removing brain tissue that has been identified as the source of seizures in continuous monitoring with intracranial electroencephalography (iEEG) – is not viable in your case because it would lead to permanent damage to your language ability.

There is, however, says your neurologist, an innovative clinical trial under way that might reduce the frequency and severity of your seizures. In this trial, a new device is implanted in your head that contains an electrode array for recording your brain activity directly from the brain surface and for applying small electric shocks to interrupt an impending seizure.

The electrode array connects wirelessly to a small computer that analyses the information from the electrodes to assess your seizure risk at any given moment, in order to decide when to administer an electric shock. The neurologist informs you that trials with similar devices have achieved a reduction in the frequency of severe seizures in 50% of patients, so there is a good chance that you would benefit from taking part in the trial.

Now, imagine you decided to participate in the trial and it turns out that the device comes with two options: In one setting, you get no feedback on your current seizure risk from the device, and the decision of when to administer an electric shock to prevent an impending seizure is taken solely by the device.

This keeps you completely out of the loop in terms of being able to modify your behaviour according to your seizure risk and – in a sense – delegates some decision-making autonomy to the intelligent medical device inside your head.

In the other setting, the system comes with a “traffic light” that signals your current risk level for a seizure, with green indicating a low, yellow a medium, and red a high probability of a seizure. In case of an evolving seizure, the device may additionally warn you with an alarm tone. In this scenario, you are kept in the loop and you retain your capacity to modify your behaviour accordingly, for example to climb down from a ladder or stop riding a bike when you are “in the red.”
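
As a rough sketch of the difference between the two settings (the thresholds, colour bands, and risk values below are invented for illustration, not the trial device’s actual parameters), the first setting acts on the risk estimate itself, while the second merely reports it and leaves the decision with you:

```python
# Hypothetical sketch of the two device settings described above.
# Thresholds and risk values are invented for illustration only.

def closed_loop_controller(seizure_risk: float) -> str:
    """Setting 1: the device decides on its own when to stimulate."""
    return "deliver electric pulse" if seizure_risk >= 0.8 else "do nothing"

def advisory_traffic_light(seizure_risk: float) -> str:
    """Setting 2: the device only signals risk; the patient adapts behaviour."""
    if seizure_risk >= 0.8:
        return "RED: high risk - alarm tone, e.g. climb down from the ladder"
    elif seizure_risk >= 0.4:
        return "YELLOW: medium risk - be cautious"
    else:
        return "GREEN: low risk"

for risk in (0.1, 0.5, 0.9):
    print(f"risk={risk}: autonomous={closed_loop_controller(risk)!r}, "
          f"advisory={advisory_traffic_light(risk)!r}")
```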

Read More »Guest Post: Mind the accountability gap: On the ethics of shared autonomy between humans and intelligent medical devices

DNA papers, please

Kuwait is planning to build a complete DNA database of not just citizens but all other residents and temporary visitors. The motivation is claimed to be antiterrorism (the universal motivation!) and fighting crime. Many are outraged, from local lawyers and a UN human rights committee to the European Society of Human Genetics, and think that it will not be very helpful against terrorism (how does having the DNA of a suicide bomber help after the fact?). Rather, there are reasons to worry about misuse in paternity testing (Kuwait has strict adultery laws) and in the politics of citizenship (which provides many benefits): citizenship is strictly circumscribed to paternal descendants of the original Kuwaiti settlers, and there is significant discrimination against people with no recognized paternity, such as the Bidun minority. Plus, and this might be another strong motivation for many of the scientists protesting against the law, it might undermine public willingness to donate genomes to research databases, where they actually do some good. Obviously it might also put visitors off visiting – would, for example, foreign heads of state accept leaving their genome in the hands of another state? Not to mention the discovery of adultery in ruling families – there is a certain gamble in doing this.

Overall, it seems few outside the Kuwaiti government are cheering for the law. When I recently participated in a panel discussion about genetic privacy organised by the BSA at the Wellcome Collection, only one or two hands in the large audience rose at the question “Would anybody here accept mandatory genetic collection?” When would it make sense to make genetic information collection mandatory?

Read More »DNA papers, please

Hide your face?

A start-up claims it can identify whether a face belongs to a high-IQ person, a good poker player, a terrorist, or a pedophile. Faception uses machine learning to generate classifiers that signal whether a face belongs in one category or not. Basically, facial appearance is used to predict personality traits, types, or behaviours. The company claims to already have sold technology to a homeland security agency to help identify terrorists. It does not surprise me at all: governments are willing to buy remarkably bad snake-oil. But even if the technology did work, it would be ethically problematic.
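
A quick back-of-the-envelope calculation shows why classifying rare categories like ‘terrorist’ from faces is problematic even on very optimistic assumptions (the numbers below are invented for illustration): with a tiny base rate, almost everyone the classifier flags is innocent.

```python
# Illustrative base-rate arithmetic (all numbers invented): even a classifier
# with high sensitivity and specificity flags mostly innocent people when
# the target category is extremely rare in the screened population.

base_rate = 1 / 100_000      # assumed prevalence of the target category
sensitivity = 0.99           # assumed true positive rate
specificity = 0.99           # assumed true negative rate

true_positives = base_rate * sensitivity
false_positives = (1 - base_rate) * (1 - specificity)
prob_target_given_flag = true_positives / (true_positives + false_positives)

print(f"Probability a flagged person is actually in the category: "
      f"{prob_target_given_flag:.4%}")   # roughly 0.1%
```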

Read More »Hide your face?

The unbearable asymmetry of bullshit

By Brian D. Earp (@briandavidearp)

Introduction

Science and medicine have done a lot for the world. Diseases have been eradicated, rockets have been sent to the moon, and convincing, causal explanations have been given for a whole range of formerly inscrutable phenomena. Notwithstanding recent concerns about sloppy research, small sample sizes, and challenges in replicating major findings—concerns I share and which I have written about at length — I still believe that the scientific method is the best available tool for getting at empirical truth. Or to put it a slightly different way (if I may paraphrase Winston Churchill’s famous remark about democracy): it is perhaps the worst tool, except for all the rest.

Scientists are people too

In other words, science is flawed. And scientists are people too. While it is true that most scientists — at least the ones I know and work with — are hell-bent on getting things right, they are not therefore immune from human foibles. If they want to keep their jobs, at least, they must contend with a perverse “publish or perish” incentive structure that tends to reward flashy findings and high-volume “productivity” over painstaking, reliable research. On top of that, they have reputations to defend, egos to protect, and grants to pursue. They get tired. They get overwhelmed. They don’t always check their references, or even read what they cite. They have cognitive and emotional limitations, not to mention biases, like everyone else.

At the same time, as the psychologist Gary Marcus has recently put it, “it is facile to dismiss science itself. The most careful scientists, and the best science journalists, realize that all science is provisional. There will always be things that we haven’t figured out yet, and even some that we get wrong.” But science is not just about conclusions, he argues, which are occasionally (or even frequently) incorrect. Instead, “It’s about a methodology for investigation, which includes, at its core, a relentless drive towards questioning that which came before.” You can both “love science,” he concludes, “and question it.”

I agree with Marcus. In fact, I agree with him so much that I would like to go a step further: if you love science, you had better question it, and question it well, so it can live up to its potential.

And it is with that in mind that I bring up the subject of bullshit.

Read More »The unbearable asymmetry of bullshit

What’s the moral difference between ad blocking and piracy?

On 16 September Marco Arment, developer of Tumblr, Instapaper and Overcast, released a new iPhone and iPad app called Peace. It quickly shot to the top of the paid app charts, but Arment began to have moral qualms about the app and its unexpected success, and two days after its release he pulled it from the App Store.

Why the qualms? For the full story, check out episode 136 of Arment’s excellent Accidental Tech Podcast and this blog post, but here’s my potted account: Peace is an ad blocker. It allows users to view webpages without advertisements. Similar software has been available for Macs and PCs for years (I use it to block some ads on my laptop), but Apple has only just made ad blockers possible on mobile devices, and Peace was one of a bunch of new apps to take advantage of this possibility. Although ad blockers help web surfers to avoid the considerable annoyance (and aesthetic unpleasantness) of webpage ads, they also come at a cost to content providers, potentially reducing their advertising revenue. According to Arment, the ethics of ad blocking is ‘complicated’, and although he still believes ad blockers should exist, and continues to use them, he thinks their downsides are serious enough that he wasn’t comfortable with being at the forefront of the ad blocking movement himself.
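
For a sense of how simple the underlying mechanism is, here is a minimal sketch of rule-based blocking (the filter patterns are hypothetical; real content blockers ship large, community-maintained rule lists, but the idea is the same): requests whose URLs match known ad-serving patterns are simply never loaded.

```python
import re

# Minimal sketch of URL-filter ad blocking (hypothetical patterns).
# A real blocker ships thousands of community-maintained filter rules.
BLOCK_PATTERNS = [
    re.compile(r"doubleclick\.net"),
    re.compile(r"/ads?/"),
    re.compile(r"banner.*\.gif$"),
]

def should_block(url: str) -> bool:
    """Return True if the request URL matches any blocking rule."""
    return any(p.search(url) for p in BLOCK_PATTERNS)

for url in ("https://example.com/article.html",
            "https://ads.doubleclick.net/track?id=123"):
    print(url, "->", "blocked" if should_block(url) else "allowed")
```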

In explaining his reasons for withdrawing the app, Arment drew a parallel between ad blocking and piracy. He doesn’t claim that the analogy is perfect (in fact, he explicitly disavows this), and nor does he take it to be a knock-down objection to ad-blocking (presumably he believes that piracy is also morally complicated). But he does think there’s something to the comparison.

Like Arment, I think there are considerable moral similarities between ad blocking and piracy. But, also like Arment, I find ad blocking, intuitively, somewhat less morally problematic. This raises an obvious question: what’s the moral difference?

Read More »What’s the moral difference between ad blocking and piracy?

Don’t write evil algorithms

Google is said to have dropped the famous “Don’t be evil” slogan. Actually, it is the holding company Alphabet that merely wants employees to “do the right thing”. Regardless of what one thinks about the actual behaviour and ethics of Google, it seems that it got one thing right early on: a recognition that it was moving in a morally charged space.

Google is in many ways an algorithm company: it was founded on PageRank, a clever algorithm for finding relevant web pages; it scaled up thanks to MapReduce algorithms; and it uses algorithms for choosing adverts, driving cars and selecting shades of blue. These algorithms have large real-world effects, and the way they function and are used matters morally.
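
As a reminder of how concrete and simple the core of such an algorithm can be, here is a small sketch of PageRank by power iteration on a toy link graph (the graph is invented, and production systems add many refinements on top of this basic idea):

```python
# Simplified PageRank by power iteration on a toy link graph.
# The graph is invented for illustration; real deployments add many refinements.

def pagerank(links: dict, damping: float = 0.85, iterations: int = 50) -> dict:
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:            # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

toy_web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
print(pagerank(toy_web))
```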

Can we make and use algorithms more ethically?

Read More »Don’t write evil algorithms