
Crime

Music Streaming, Hateful Conduct and Censorship

Written by Rebecca Brown

Last month, one of the largest music streaming services in the world, Spotify, announced a new ‘hate content and hateful conduct’ policy. In it, they state that “We believe in openness, diversity, tolerance and respect, and we want to promote those values through music and the creative arts.” They condemn hate content that “expressly and principally promotes, advocates, or incites hatred or violence against a group or individual based on characteristics, including race, religion, gender identity, sex, ethnicity, nationality, sexual orientation, veteran status, or disability.” Content found to meet these criteria may be removed from the service, or may cease to be promoted, for example through playlists and advertisements. Spotify further describe how they will approach “hateful conduct” by artists:

We don’t censor content because of an artist’s or creator’s behavior, but we want our editorial decisions – what we choose to program – to reflect our values. When an artist or creator does something that is especially harmful or hateful (for example, violence against children and sexual violence), it may affect the ways we work with or support that artist or creator.

An immediate consequence of this policy was the removal from featured playlists of R. Kelly and XXXTentacion, two American R&B artists. Whilst the 20-year-old XXXTentacion has had moderate success in the US, R. Kelly is one of the biggest R&B artists in the world. As a result, the decision not to playlist R. Kelly attracted significant attention, including accusations of censorship and racism. Subsequently, Spotify backtracked on their decision, rescinding the section of their policy on hateful conduct and announcing regret for the “vague” language of the policy, which “left too many elements open to interpretation.” Consequently, XXXTentacion’s music has reappeared on playlists such as Rap Caviar, although R. Kelly has not (yet) been reinstated. The controversy surrounding R. Kelly and Spotify raises questions about the extent to which commercial organisations, such as music streaming services, should make clear moral expressions.

Video Series: Tom Douglas Defends the Chemical Castration of Sex Offenders

The Minister of Justice in the UK wants to dramatically increase the use of chemical castration in sex offenders to reduce their risk of reoffending. Dr Tom Douglas (University of Oxford) argues that offering chemical castration to sex offenders might be a better option than current practices to prevent sex offenders from reoffending (e.g. incarceration), and…

Should PREDICTED Smokers Get Transplants?

By Tom Douglas

Jack has smoked a packet a day since he was 22. Now, at 52, he needs a heart and lung transplant.

Should he be refused a transplant to allow a non-smoker with a similar medical need to receive one? More generally: does his history of smoking reduce his claim to scarce medical resources?

If it does, then what should we say about Jill, who has never touched a cigarette, but is predicted to become a smoker in the future? Perhaps Jill is 20 years old and from an ethnic group with very high rates of smoking uptake in their 20s. Or perhaps a machine-learning tool has analysed her past Facebook posts and Google searches and identified her as at ‘high risk’ of taking up smoking: she has an appetite for risk, an unusual susceptibility to peer pressure, and a large number of smokers among her friends. Should Jill’s predicted smoking count against her, were she to need a transplant? Intuitively, it shouldn’t. But why not?


Tongue Splitting, Nipple Excision, And Ear Removal: Why Prosecute The Operator But Not The Customer?

By Charles Foster

Image: ‘Split tongue: procedure, safety, result’: Tattoo World: Standard YouTube licence.

The appellant in R v BM was a tattooist and body piercer who also engaged in ‘body modification’. He was charged with three offences of wounding with intent to do grievous bodily harm. These entailed: (a) removal of an ear; (b) removal of a nipple; and (c) division of a tongue so that it looked reptilian. In each case the customer had consented. There was, said the appellant, no offence because of this consent.

Where an adult decides to do something that is not prohibited by the law, the law will generally not interfere.

In Schloendorff v Society of New York Hospital (1914) 105 NE 92 Cardozo J said:

“Every human being of adult years and sound mind has a right to determine what shall be done with his own body.”[1]

This principle has been fairly consistently recognised in the English law.[2] Thus, for instance, in In re T (Adult: Refusal of Treatment), Butler-Sloss LJ cited with approval this section of the judgment of Robins JA in Malette v Shulman[3]:

‘The right to determine what shall be done with one’s own body is a fundamental right in our society. The concepts inherent in this right are the bedrock upon which the principles of self-determination and individual autonomy are based. Free individual choice in matters affecting this right should, in my opinion, be accorded very high priority.’

Video Series: Is AI Racist? Can We Trust it? Interview with Prof. Colin Gavaghan

Should self-driving cars be programmed in a way that always protects ‘the driver’? Who is responsible if an AI makes a mistake? Will AI used in policing be less racially biased than police officers? Should a human being always take the final decision? Will we become too reliant on AIs and lose important skills? Many interesting…

‘Being a burden’: an Illegitimate Ground For Assisted Dying

The issue of the legality in England and Wales of physician-assisted suicide has recently been revisited by the Divisional Court. Judgment is awaited. The judgment of the Court of Appeal, granting permission for judicial review, is here.

The basic issue before the Court of Appeal was the same as that in Nicklinson v Ministry of Justice and R (Purdy) v DPP: does the right to determine how one lives one’s private life (protected by Article 8 of the European Convention on Human Rights) confer a right to have an assisted death?

Many factors have been said to be relevant to decisions about assisted dying. They include intractable pain (rather a weak criterion, given modern palliative methods), hopeless prognosis – likely to result in death in a short time – and simple autonomy (‘It’s my right to determine where, when, and in what circumstances I end my life, and that’s an end of the matter’). One factor, commonly in the minds of patients asking for help in ending their lives, but rarely mentioned by advocates of assisted dying, is that the patient feels that she is a burden to her family and carers.

Does Female Genital Mutilation Have Health Benefits? The Problem with Medicalizing Morality

By Brian D. Earp (@briandavidearp)

Four members of the Dawoodi Bohra sect of Islam living in Detroit, Michigan have recently been indicted on charges of female genital mutilation (FGM). This is the first time the US government has prosecuted an “FGM” case since a federal law was passed in 1996. The world is watching to see how the case turns out.

A lot is at stake here. Multiculturalism, religious freedom, the limits of tolerance; the scope of children’s—and minority group—rights; the credibility of scientific research; even the very concept of “harm.”

To see how these pieces fit together, I need to describe the alleged crime.


The non-identity problem of professional philosophers

By Charles Foster

Philosophers have a non-identity problem. It is that they are not identified as relevant by the courts. This, in an age where funding and preferment are often linked to engagement with the non-academic world, is a worry.

This irrelevance was brutally demonstrated in an English Court of Appeal case (‘the CICA case’), the facts of which were a tragic illustration of the non-identity problem.

Using AI to Predict Criminal Offending: What Makes it ‘Accurate’, and What Makes it ‘Ethical’.

Jonathan Pugh

Tom Douglas

The Durham Police force plans to use an artificial intelligence system to inform decisions about whether or not to keep a suspect in custody.

Developed using data collected by the force, the Harm Assessment Risk Tool (HART) has already undergone a two-year trial period to monitor its accuracy. Over the trial period, predictions of low risk were accurate 98% of the time, whilst predictions of high risk were accurate 88% of the time, according to media reports. Whilst HART has not so far been used to inform custody sergeants’ decisions during this trial period, the police force now plans to take the system live.

Given the high stakes involved in the criminal justice system, and the way in which artificial intelligence is beginning to surpass human decision-making capabilities in a wide array of contexts, it is unsurprising that criminal justice authorities have sought to harness AI. However, the use of algorithmic decision-making in this context also raises ethical issues. In particular, some have been concerned about the potentially discriminatory nature of the algorithms employed by criminal justice authorities.

These issues are not new. In the past, offender risk assessment often relied heavily on psychiatrists’ judgements. However, partly due to concerns about inconsistency and poor accuracy, criminal justice authorities now already use algorithmic risk assessment tools. Based on studies of past offenders, these tools use forensic history, mental health diagnoses, demographic variables and other factors to produce a statistical assessment of re-offending risk.

Beyond concerns about discrimination, algorithmic risk assessment tools raise a wide range of ethical questions, as we have discussed with colleagues in the linked paper. Here we address one that is particularly apposite with respect to HART: how should we balance the conflicting moral values at stake in deciding the kind of accuracy we want such tools to prioritise?
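Read charitably, the reported figures are predictive values: the proportion of each kind of prediction that turns out to be correct. A minimal sketch with made-up numbers (not HART’s actual data or method) shows how both figures fall out of a single confusion matrix, and why they measure different things:

```python
# Hypothetical confusion matrix for a risk tool.
# These counts are illustrative only, chosen to reproduce the
# reported 88% / 98% figures; they are not HART's actual data.
tp = 880   # predicted high risk, did reoffend
fp = 120   # predicted high risk, did not reoffend
tn = 1960  # predicted low risk, did not reoffend
fn = 40    # predicted low risk, did reoffend

# "Accuracy of a high-risk prediction" = positive predictive value
ppv = tp / (tp + fp)
# "Accuracy of a low-risk prediction" = negative predictive value
npv = tn / (tn + fn)

print(f"high-risk predictions correct: {ppv:.0%}")  # 88%
print(f"low-risk predictions correct:  {npv:.0%}")  # 98%
```

Shifting the tool’s decision threshold moves cases between these cells: flagging more people as high risk catches more eventual reoffenders (fewer false negatives) but dilutes the share of high-risk flags that are correct (lower PPV). Which of those errors matters more is precisely the moral balancing question raised above.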


Video Series: Tom Douglas on Using Neurointerventions in Crime Prevention

Should neurointerventions be used to prevent crime? For example, should we use chemical castration as part of efforts to prevent re-offending in sex offenders? What about methadone treatment for heroin-dependent offenders? Would offering such interventions to incarcerated individuals involve coercion? Would it violate their right to freedom from mental interference? Is there such a right?…