Information Ethics

Using AI to Predict Criminal Offending: What Makes it ‘Accurate’, and What Makes it ‘Ethical’.

Jonathan Pugh

Tom Douglas


The Durham Police force plans to use an artificial intelligence system to inform decisions about whether or not to keep a suspect in custody.

Developed using data collected by the force, the Harm Assessment Risk Tool (HART) has already undergone a two-year trial period to monitor its accuracy. Over the trial period, predictions of low risk were accurate 98% of the time, whilst predictions of high risk were accurate 88% of the time, according to media reports. Whilst HART has not so far been used to inform custody sergeants’ decisions during this trial period, the police force now plans to take the system live.
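The two accuracy figures quoted above are most naturally read as predictive values: of all the suspects given a particular label, how many turned out to be labelled correctly. A minimal sketch, using invented counts (these are illustrative assumptions, not HART's actual trial data):

```python
# Hypothetical counts illustrating how "a prediction was accurate X% of the
# time" is usually computed as a predictive value. The numbers are made up
# and are not taken from the HART trial.

def predictive_value(correct: int, total_predicted: int) -> float:
    """Fraction of predictions with a given label that turned out correct."""
    return correct / total_predicted

# Suppose, hypothetically, the tool made 1000 low-risk and 500 high-risk calls.
low_risk_correct = 980    # labelled low risk and did not go on to offend
high_risk_correct = 440   # labelled high risk and did go on to offend

print(predictive_value(low_risk_correct, 1000))   # 0.98
print(predictive_value(high_risk_correct, 500))   # 0.88
```

Note that these two figures can be tuned against each other: a tool that labels more people high risk will tend to catch more future offenders at the cost of more false alarms, which is exactly the trade-off in values the post goes on to discuss.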

Given the high stakes involved in the criminal justice system, and the way in which artificial intelligence is beginning to surpass human decision-making capabilities in a wide array of contexts, it is unsurprising that criminal justice authorities have sought to harness AI. However, the use of algorithmic decision-making in this context also raises ethical issues. In particular, some have been concerned about the potentially discriminatory nature of the algorithms employed by criminal justice authorities.

These issues are not new. In the past, offender risk assessment often relied heavily on psychiatrists’ judgements. However, partly due to concerns about inconsistency and poor accuracy, criminal justice authorities now already use algorithmic risk assessment tools. Based on studies of past offenders, these tools use forensic history, mental health diagnoses, demographic variables and other factors to produce a statistical assessment of re-offending risk.
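The basic shape of such an actuarial tool can be sketched as a weighted combination of risk factors mapped to a probability. The factors, weights, and intercept below are invented for illustration; real tools fit their weights on data from past offenders:

```python
import math

# Hedged sketch of an actuarial risk score: a logistic combination of
# weighted risk factors. All factor names and weights here are hypothetical.

WEIGHTS = {"prior_convictions": 0.4, "age_at_first_offence": -0.05}
INTERCEPT = -1.0

def reoffending_risk(features: dict) -> float:
    """Combine weighted risk factors into a re-offending probability."""
    score = INTERCEPT + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-score))   # logistic link: score -> (0, 1)

risk = reoffending_risk({"prior_convictions": 3, "age_at_first_offence": 17})
print(round(risk, 2))
```

The statistical assessment the tools produce is, in the end, just such a number; the ethical questions arise in choosing which factors may legitimately feed into it and what is done with the output.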

Beyond concerns about discrimination, algorithmic risk assessment tools raise a wide range of ethical questions, as we have discussed with colleagues in the linked paper. Here we address one that is particularly apposite with respect to HART: how should we balance the conflicting moral values at stake in deciding the kind of accuracy we want such tools to prioritise?

Continue reading

The Clickbait Candidate

By James Williams (@WilliamsJames_)
Note: This is a cross-post with Quillette magazine.

While ‘interrobang’ sounds like a technique Donald Trump might add to the Guantanamo Bay playbook, it in fact refers to a punctuation mark: a disused mashup of interrogation and exclamation that indicates shock, surprise, excitement, or disbelief. It looks like this: ‽ (a rectangle means your font doesn’t support the symbol). In view of how challenging it seems for anyone to articulate the fundamental weirdness of Trump’s proximity to the office of President of the United States, I propose that we resuscitate the interrobang, because our normal orthographic tools clearly are not up to the task.

Yet even more interrobang-able than the prospect of a Trump presidency is the fact that those opposing his candidacy seem to have almost no understanding of the media dynamics that have enabled it to rise and thrive. Trump is perhaps the most straightforward embodiment of the dynamics of the so-called ‘attention economy’—the pervasive, all-out war over our attention in which all of our media have now been conscripted—that the world has yet seen. He is one of the geniuses of our time in the art of attentional manipulation.

If we ever hope to have a societal conversation about the design ethics of the attention economy—especially the ways in which it incentivizes technology design to push certain buttons in our brains that are incompatible with the assumptions of democracy—now would be the time.

Continue reading

Guest Post: Mind the accountability gap: On the ethics of shared autonomy between humans and intelligent medical devices

Guest Post by Philipp Kellmeyer

Imagine you had epilepsy and, despite taking a daily cocktail of several anti-epileptic drugs, still suffered several seizures per week, some minor, some resulting in bruises and other injuries. The source of your epileptic seizures lies in a brain region that is important for language. Therefore, your neurologist tells you, epilepsy surgery – removing brain tissue that has been identified as the source of seizures in continuous monitoring with intracranial electroencephalography (iEEG) – is not viable in your case because it would lead to permanent damage to your language ability.

There is however, says your neurologist, an innovative clinical trial under way that might reduce the frequency and severity of your seizures. In this trial, a new device is implanted in your head that contains an electrode array for recording your brain activity directly from the brain surface and for applying small electric shocks to interrupt an impending seizure.

The electrode array connects wirelessly to a small computer that analyses the information from the electrodes to assess your seizure risk at any given moment in order to decide when to administer an electric shock. The neurologist informs you that trials with similar devices have achieved a reduction in the frequency of severe seizures in 50% of patients, so there is a good chance that you would benefit from taking part in the trial.

Now, imagine you decided to participate in the trial and it turns out that the device comes with two options: In one setting, you get no feedback on your current seizure risk by the device and the decision when to administer an electric shock to prevent an impending seizure is taken solely by the device.

This keeps you completely out of the loop in terms of being able to modify your behaviour according to your seizure risk and – in a sense – delegates some decision-making autonomy to the intelligent medical device inside your head.

In the other setting, the system comes with a “traffic light” that signals your current risk level for a seizure, with green indicating a low, yellow a medium, and red a high probability of a seizure. In case of an evolving seizure, the device may additionally warn you with an alarm tone. In this scenario, you are kept in the loop and you retain your capacity to modify your behavior accordingly, for example to step from a ladder or stop riding a bike when you are “in the red.”
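The "traffic light" setting amounts to a simple thresholding of the device's estimated risk. A minimal sketch, in which the thresholds and the risk score itself are illustrative assumptions (the post does not describe the actual device's risk model):

```python
# Hedged sketch of the "traffic light" feedback setting described above.
# The risk estimate and the cut-off values are hypothetical.

def traffic_light(seizure_risk: float) -> str:
    """Map an estimated seizure probability (0.0-1.0) to a feedback colour."""
    if seizure_risk < 0.3:
        return "green"    # low risk: carry on as normal
    elif seizure_risk < 0.7:
        return "yellow"   # medium risk: exercise caution
    else:
        return "red"      # high risk: e.g. step off the ladder, stop cycling

print(traffic_light(0.1))   # green
print(traffic_light(0.5))   # yellow
print(traffic_light(0.9))   # red
```

The ethically salient point is not the thresholds themselves but who acts on them: in the first setting the device alone decides when to stimulate, while in this setting the patient sees the same risk estimate and can respond to it.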

Continue reading

DNA papers, please

Kuwait is planning to build a complete DNA database of not just citizens but all other residents and temporary visitors. The claimed motivation is antiterrorism (the universal motivation!) and fighting crime. Many are outraged, from local lawyers and a UN human rights committee to the European Society of Human Genetics, and think that it will not be very helpful against terrorism (how does having the DNA of a suicide bomber help after the fact?). Rather, there are reasons to worry about misuse in paternity testing (Kuwait has strict adultery laws) and in the politics of citizenship (which provides many benefits): citizenship is strictly circumscribed to paternal descendants of the original Kuwaiti settlers, and there is significant discrimination against people with no recognized paternity, such as the Bidun minority. Plus, and this might be another strong motivation for many of the scientists protesting against the law, it might reduce public willingness to donate genomes to research databases, where they actually do some good. Obviously it might also put visitors off visiting – would, for example, foreign heads of state accept leaving their genome in the hands of another state? Not to mention the discovery of adultery in ruling families – there is a certain gamble in doing this.

Overall, it seems few outside the Kuwaiti government are cheering for the law. When I recently participated in a panel discussion organised by the BSA at the Wellcome Collection about genetic privacy, at the question “Would anybody here accept mandatory genetic collection?” only one or two hands rose in the large audience. When would mandatory collection of genetic information make sense?

Continue reading

Hide your face?

A start-up claims it can identify whether a face belongs to a high-IQ person, a good poker player, a terrorist, or a pedophile. Faception uses machine learning to generate classifiers that signal whether a face belongs in one category or not. Basically, facial appearance is used to predict personality traits, types, or behaviors. The company claims to have already sold technology to a homeland security agency to help identify terrorists. It does not surprise me at all: governments are willing to buy remarkably bad snake oil. But even if the technology did work, it would be ethically problematic.

Continue reading

The unbearable asymmetry of bullshit

By Brian D. Earp (@briandavidearp)

* Note: this article was first published online at Quillette magazine. The official version is forthcoming in the HealthWatch Newsletter; see


Science and medicine have done a lot for the world. Diseases have been eradicated, rockets have been sent to the moon, and convincing, causal explanations have been given for a whole range of formerly inscrutable phenomena. Notwithstanding recent concerns about sloppy research, small sample sizes, and challenges in replicating major findings—concerns I share and which I have written about at length — I still believe that the scientific method is the best available tool for getting at empirical truth. Or to put it a slightly different way (if I may paraphrase Winston Churchill’s famous remark about democracy): it is perhaps the worst tool, except for all the rest.

Scientists are people too

In other words, science is flawed. And scientists are people too. While it is true that most scientists — at least the ones I know and work with — are hell-bent on getting things right, they are not therefore immune from human foibles. If they want to keep their jobs, at least, they must contend with a perverse “publish or perish” incentive structure that tends to reward flashy findings and high-volume “productivity” over painstaking, reliable research. On top of that, they have reputations to defend, egos to protect, and grants to pursue. They get tired. They get overwhelmed. They don’t always check their references, or even read what they cite. They have cognitive and emotional limitations, not to mention biases, like everyone else.

At the same time, as the psychologist Gary Marcus has recently put it, “it is facile to dismiss science itself. The most careful scientists, and the best science journalists, realize that all science is provisional. There will always be things that we haven’t figured out yet, and even some that we get wrong.” But science is not just about conclusions, he argues, which are occasionally (or even frequently) incorrect. Instead, “It’s about a methodology for investigation, which includes, at its core, a relentless drive towards questioning that which came before.” You can both “love science,” he concludes, “and question it.”

I agree with Marcus. In fact, I agree with him so much that I would like to go a step further: if you love science, you had better question it, and question it well, so it can live up to its potential.

And it is with that in mind that I bring up the subject of bullshit.

Continue reading

What’s the moral difference between ad blocking and piracy?

On 16 September Marco Arment, developer of Tumblr, Instapaper and Overcast, released a new iPhone and iPad app called Peace. It quickly shot to the top of the paid app charts, but Arment began to have moral qualms about the app, and its unexpected success, and two days after its release, he pulled it from the app store.

Why the qualms? For the full story, check out episode 136 of Arment’s excellent Accidental Tech Podcast and this blog post, but here’s my potted account: Peace is an ad blocker. It allows users to view webpages without advertisements. Similar software has been available for Macs and PCs for years (I use it to block some ads on my laptop), but Apple has only just made ad blockers possible on mobile devices, and Peace was one of a bunch of new apps to take advantage of this possibility. Although ad blockers help web surfers to avoid the considerable annoyance (and aesthetic unpleasantness) of webpage ads, they also come at a cost to content providers, potentially reducing their advertising revenue. According to Arment, the ethics of ad blocking is ‘complicated’, and although he still believes ad blockers should exist, and continues to use them, he thinks their downsides are serious enough that he wasn’t comfortable with being at the forefront of the ad blocking movement himself.

In explaining his reasons for withdrawing the app, Arment drew a parallel between ad blocking and piracy. He doesn’t claim that the analogy is perfect (in fact, he explicitly disavows this), and nor does he take it to be a knock-down objection to ad-blocking (presumably he believes that piracy is also morally complicated). But he does think there’s something to the comparison.

Like Arment, I think there are considerable moral similarities between ad blocking and piracy. But, again like Arment, I find ad blocking intuitively somewhat less morally problematic than piracy. This raises an obvious question: what’s the moral difference?

Continue reading

Don’t write evil algorithms

Google is said to have dropped the famous “Don’t be evil” slogan. Actually, it is the holding company Alphabet that merely wants employees to “do the right thing”. Regardless of what one thinks about the actual behaviour and ethics of Google, it seems that it got one thing right early on: a recognition that it was moving in a morally charged space.

Google is in many ways an algorithm company: it was founded on PageRank, a clever algorithm for finding relevant web pages, it scaled up thanks to MapReduce, and it uses algorithms for choosing adverts, driving cars and selecting nuances of blue. These algorithms have large real-world effects, and the way they function and are used matters morally.
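The core idea behind PageRank can be sketched in a few lines: pages are ranked by the stationary distribution of a random surfer who mostly follows links and occasionally jumps to a random page. The three-page graph below is made up, and this toy power iteration omits refinements (dangling pages, personalisation) of the real algorithm:

```python
# Toy power-iteration sketch of PageRank. The graph and iteration count are
# illustrative; real implementations handle dangling nodes and convergence
# checks, among other things.

DAMPING = 0.85   # probability of following a link rather than teleporting

def pagerank(links, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Each page keeps a teleport share, plus its share of inbound rank.
        new = {p: (1 - DAMPING) / n for p in pages}
        for p, outs in links.items():
            for q in outs:
                new[q] += DAMPING * rank[p] / len(outs)
        rank = new
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
# "c" collects links from both "a" and "b", so it ranks highest here
print(max(ranks, key=ranks.get))
```

Even this toy version makes the moral point concrete: the damping constant and the link structure fully determine who ends up on top, so seemingly technical parameter choices are also choices about visibility.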

Can we make and use algorithms more ethically?

Continue reading

A Code of Conduct for Peer Reviewers in the Humanities and Social Sciences

1. The fact that you disagree with the author’s conclusion is not a reason for advising against publication. Quite the contrary, in fact. You have been selected as a peer reviewer because of your eminence, which means (let’s face it), your conservatism. Accordingly if you think the conclusion is wrong, it is far more likely to generate interest and debate than if you agree with it.

2. A very long review will simply indicate to the editors that you’ve got too much time on your hands. And if you have, that probably indicates that you’re not publishing enough yourself. Accordingly excessive length indicates that you’re not appropriately qualified.

Continue reading

What to do with Google—nothing, break it up, nationalise it, turn it into a public utility, treat it as a public space, or something else?

Google has become a service that one cannot go without if one wants to be a well-adapted participant in society. For many, Google is the single most important source of information. Yet people do not have any understanding of the way Google individually curates content for its users. Its algorithms are secret. For the past year, and as a result of the European Court of Justice’s ruling on the right to be forgotten, Google has been deciding which URLs to delist from its search results on the basis of personal information being “inaccurate, inadequate or no longer relevant.” The search engine has reported that it has received over 250,000 individual requests concerning 1 million URLs in the past year, and that it has delisted around 40% of the URLs that it has reviewed. As was made apparent in a recent open letter from 80 academics pressing Google for more transparency, the criteria being used to make these decisions are also secret. We have no idea what sort of information typically gets delisted, or in what countries. The academics signing the letter point out how Google has been charged with the task of balancing privacy and access to information, thereby shaping public discourse, without facing any kind of public scrutiny. Google rules over us but we have no knowledge of what the rules are.

Continue reading

