It was announced last week that a new offence of ‘wilful neglect or mistreatment’ is to be created for NHS hospital staff whose conduct amounts to the deliberate or reckless mistreatment of patients. This offence will be modelled on an existing offence under the Mental Capacity Act which punishes the wilful neglect or ill-treatment of patients lacking capacity. Currently, a medical worker convicted of this offence faces a maximum sentence of five years’ imprisonment, or an unlimited fine. The sanctions for the proposed new offence are likely to be of a similar severity.
The creation of the offence comes in the wake of the inquiry into the widespread negligence that occurred at Mid Staffordshire hospital. The new offence, proposed following a review of patient safety, is intended principally to deter healthcare workers from mistreating patients. The leader of the review, Professor Don Berwick, emphasised that patient safety must become the top priority and that the measure was needed to target the worst cases of a ‘couldn’t care less’ attitude that led to ‘wilful or reckless neglect or mistreatment’.
Concerns about its impact
Whilst most would agree that patient safety should clearly be a priority, there has been concern that the new criminal sanction could create a ‘climate of fear’ amongst healthcare workers and that individual workers will be penalised for mistakes that are the result of inadequate staffing or simple human error, rather than blameworthy acts of malice.
There has been much discussion this week about whether Thorpe Park’s ‘Asylum’ maze perpetuates the stigma that sometimes surrounds mental illness. The live action horror maze is an attraction that has opened for Halloween for the last eight years. Replete with special effects, its interior is set up to look like the intermittently-lit corridors of a dilapidated hospital. As the maze-goers try to find their way through the corridors, actors dressed as ‘patients’ jump out, scare and chase them until they find the exit. You can get a sense of the maze here.
Polls have been set up to gauge the public response to the maze, and petitions have been started in an attempt to get Thorpe Park to close it down. Having set up a poll on Twitter, Paul Jenkins, the chief executive officer of the charity Rethink Mental Illness, has been quoted as saying ‘While of course there’s nothing wrong with a bit of Halloween fun, explicit references to ‘patients’ crosses a line and reinforces damaging stereotypes about mental illness.’
Last week, the Daily Mail reported on Dr Anna Smajdor’s paper in which she argues that compassion ‘is not a necessary component’ of healthcare. This claim contrasts interestingly with Jeremy Hunt’s recent proposal that all student nurses should have to prove that they are capable of caring by spending a year on wards carrying out basic tasks. This proposal, along with the suggestion that pay be linked to levels of kindness would, according to Hunt, go some way to improving the standard of NHS care. The motivating idea behind Hunt’s proposals is that lack of compassion amongst NHS staff is partly responsible for poor care and, in some cases, for cultivating a ‘culture of cruelty’.
So is compassion a necessary component of healthcare? Is an adequate standard of care necessarily unattainable when compassion amongst staff is absent? In considering these questions I do not intend to embark on a detailed critique of Dr Smajdor’s paper. Instead, I will begin from her main ideas and use them to motivate a general discussion of the role of compassion in healthcare. According to the report, Dr Smajdor argues for two main claims: 1) that compassion is not a necessary component of healthcare – that acceptable standards can be attained without it – and 2) that compassion can actually be dangerous for healthcare workers, possibly resulting in impaired standards of care.
The government is currently consulting on whether the maximum sentences for aggravated offences under the Dangerous Dogs Act 1991 should be increased. This offence category covers cases in which someone allows a dog to be dangerously out of control and the dog injures or kills a person or an assistance dog. Respondents to the survey can indicate whether they want tougher penalties for these sorts of cases. The suggested penalties for injury to a person – as well as death or injury of a guide dog – are three, five, seven or 10 years in prison. In relation to cases involving the death of a person, the respondent is asked: “Which of the following options most closely resembles the appropriate maximum penalty: seven years, 10 years, 14 years or life imprisonment?”
Given that the current maximum sentence for cases involving death is two years in prison, changing the law to match any of these options would represent a significant increase in the severity of the sanction. Whilst the current two-year maximum has understandably struck many as too low, it is important that those responding to the consultation — and those revising the law it is intended to inform — think carefully about the principles that would justify an increase.
Last week, Canadian researchers published a study showing that some modern slot machines ‘trick’ players – by way of their physiology – into feeling like they are winning when in fact they are losing. The researchers describe the phenomenon of ‘losses disguised as wins’, in which net losses involving some winning lines are experienced in the same way as net wins due to physiological responses to the accompanying sounds and lights. The obvious worry is that players who are tricked into thinking they’re winning will keep playing for longer and will be motivated to come back to try again.
The game set-up is as follows: players bet on 15 lines simultaneously, any of which they might win or lose. A player will accrue a net profit if the total amount collected from all winning lines is greater than the total amount wagered on all 15 lines. Such an outcome is accompanied by lights and sounds announcing the wins. However, lights and sounds will also play if any of the lines win, even if the net amount collected is less than the total amount wagered on all 15 lines. If a player bets 5 credits per line (5 x 15 = 75) and wins 10 credits back on each of 3 lines (= 30), then the player has actually lost 45 credits, even though the lights and sounds indicate winning. The loss, the researchers claim, is thus disguised as a win.
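The arithmetic behind a ‘loss disguised as a win’ can be sketched in a few lines of code. This is only an illustration of the example above – the function name and numbers are my own, not the study’s:

```python
# Sketch of the 'losses disguised as wins' arithmetic described above.
# Names and figures are illustrative, not taken from the study itself.

def net_outcome(bet_per_line, num_lines, winnings_per_winning_line):
    """Return the player's net result (in credits) for one spin."""
    total_wagered = bet_per_line * num_lines
    total_collected = sum(winnings_per_winning_line)
    return total_collected - total_wagered

# The example from the text: 5 credits on each of 15 lines (75 wagered),
# with 3 winning lines paying 10 credits each (30 collected).
result = net_outcome(5, 15, [10, 10, 10])
print(result)  # -45: a net loss, even though lights and sounds signal a 'win'
```

The machine’s feedback is keyed to `winnings_per_winning_line` being non-empty, not to whether `result` is positive – which is precisely the mismatch the researchers call a disguised loss.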
A recent study has shown that a person’s implicit racial bias can be reduced if she spends some time experiencing her body as dark-skinned. Psychologists in Spain used an immersive virtual reality technique to allow participants to ‘see’ themselves with a different skin colour. They measured the participants’ implicit racial bias before and after the intervention, finding that the embodiment of light-skinned individuals in a dark-skinned virtual body at least temporarily reduced their implicit bias against people who are coded as ‘out-group’ on the basis of skin colour.
Implicit racial bias is an evolved, unconscious tendency to feel more positively towards members of one’s own race (one’s ‘in-group’) than towards members of a different race (members of an ‘out-group’). The bias can be (and was in this study) measured using a version of the implicit association test, which requires participants to quickly categorise faces (black or white) and words (positive or negative) into groups. Implicit bias is calculated from the differences in speed and accuracy between categorising (white faces, positive words) and (black faces, negative words) compared to (black faces, positive words) and (white faces, negative words). Crucially, implicit racial bias has been shown to be uncorrelated with explicit racial bias – self-reports of negative racial stereotypes. This means that even those who are not consciously averse to people from other racial groups often demonstrate a deep-seated bias against them – an evolutionary hangover. Hearteningly, the authors of the study started from the idea that encoding people by race may be a reversible by-product of human evolution used to detect coalitional alliances. What their study confirmed is that immersive virtual reality provides a powerful tool for placing people into a different race ‘coalition’ by changing their body representation and consequently reducing their implicit aversion to the racial characteristics there represented.
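The core of the scoring logic can be illustrated with a short sketch. The real IAT uses a more elaborate ‘D score’ (error penalties, practice and test blocks); this simplified version, with hypothetical reaction times, only shows the central idea that bias is inferred from how much slower the ‘incongruent’ pairings are than the ‘congruent’ ones:

```python
# Simplified illustration of implicit-association scoring.
# The actual IAT 'D score' procedure is more involved; this sketch
# only captures the core comparison described in the text.

from statistics import mean, stdev

def iat_score(congruent_rts, incongruent_rts):
    """Mean reaction-time difference, scaled by the pooled variability.

    Positive score = slower on incongruent trials (e.g. black faces paired
    with positive words), which is taken to indicate implicit bias.
    """
    pooled = congruent_rts + incongruent_rts
    return (mean(incongruent_rts) - mean(congruent_rts)) / stdev(pooled)

# Hypothetical reaction times in milliseconds
congruent = [620, 590, 610, 600, 580]
incongruent = [720, 750, 700, 690, 710]
print(iat_score(congruent, incongruent))  # positive: suggests implicit bias
```

On this measure, the study’s intervention would show up as the score shrinking towards zero when the test is re-administered after the virtual embodiment.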
Channel 4 was censured by Ofcom this week for cutting to a light-hearted sponsorship advert just after viewers had watched the particularly graphic and disturbing rape scene in the film The Girl with the Dragon Tattoo. The Phones 4 U sponsorship ad was thought to be especially inappropriate for that moment as it features a couple apparently having sex, during which the woman pauses and asks the camera, ‘I’m faking it, can I upgrade?’ Ofcom received 17 complaints about the timing of the advert and this week concluded that ‘the juxtaposition of a light-hearted sponsorship credit featuring a woman during sex with a disturbing and distressing rape scene in a film was clearly unsuitable… In Ofcom’s view this clearly had the potential to be offensive to viewers’.
The timing was clearly unfortunate, but to say that the juxtaposition was offensive is a stronger claim. Of course, the psychological effect of being immersed in a violent scene at one moment and then confronted with the same(ish) subject matter presented trivially will not do much for the viewer’s aesthetic experience. But the regulator’s suggestion seemed not only to be that the juxtaposition detracted from the viewer’s enjoyment, but also that it was in some way wrong.
Some researchers in the US recently conducted an ‘experiment in the law as algorithm’. (One of the researchers involved with the project was interviewed by Ars Technica, here.) At first glance, this seems like quite a simple undertaking for someone with knowledge of a particular law and mathematical proficiency: laws are clearly defined rules, which can be broken in clearly defined ways. This is most true for strict liability offences, which require no proof of a mental element of the offence (the mens rea). An individual can commit a strict liability offence even if she had no knowledge that her act was criminal and had no intention to commit the crime. All that is required under strict liability statutes is that the act itself (the actus reus) is voluntary. Essentially: if you did it, you’re liable – it doesn’t matter why or how. So, for strict liability offences such as speeding it would seem straightforward enough to create an algorithm that could compare actual driving speed with the legal speed limit, and adjudicate liability accordingly.
This possibility of law as algorithm is what the US researchers aimed to test out with their experiment. They imagined the future possibility of automated law enforcement, especially for simple laws like those governing driving. To conduct their experiment, the researchers assigned a group of 52 programmers the task of automating the enforcement of driving speed limits. A late-model vehicle was equipped with a sensor that collected actual vehicle speed over an hour-long commute. The programmers (without collaboration) each wrote a program that computed the number of speed limit violations and issued mock traffic tickets.
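A minimal sketch of the kind of program the participants wrote might look like the following. The data format and the rule for what counts as one violation are my assumptions, not details from the study:

```python
# Sketch of an automated speed-limit enforcer, as described above.
# Input format and ticketing rule are assumed for illustration.

def count_violations(speed_readings, speed_limit):
    """Count ticketable events: each continuous run of readings over
    the limit is treated as a single violation."""
    violations = 0
    speeding = False
    for speed in speed_readings:
        if speed > speed_limit and not speeding:
            violations += 1   # a new over-limit run begins: issue a ticket
            speeding = True
        elif speed <= speed_limit:
            speeding = False  # the over-limit run has ended
    return violations

# One-second speed samples (mph) from a short stretch of driving, limit 30
readings = [28, 29, 31, 33, 32, 29, 28, 35, 36, 30]
print(count_violations(readings, 30))  # 2: two separate over-limit runs
```

Notably, even this simple task forces discretionary choices – should a sustained run over the limit earn one ticket or one per sample? what about momentary sensor noise? – which is part of what makes ‘law as algorithm’ less straightforward than it first appears.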
Yesterday, Charles Foster discussed the recent study showing that Facebook ‘Likes’ can be plugged into an algorithm to predict things about people – things about their demographics, their habits and their personalities – that they didn’t explicitly disclose. Charles argued that, even though the individual ‘Likes’ were voluntarily published, to use an algorithm to generate further predictions would be unethical on the grounds that individuals have not consented to it and, consequently, that to go ahead and do it anyway is a violation of their privacy.
I wish to make three points contesting his strong conclusion, instead offering a more qualified position: simply running the algorithm on publicly available ‘Likes’ data is not unethical, even if no consent has been given. Doing particular things based on the output of the algorithm, however, might be.
We all know that we are under CCTV surveillance on many occasions each day, particularly when we are in public places. For the most part we accept that being – or potentially being – watched in public places is a reasonable price to pay for the security that 24-hour surveillance offers. However, we also have expectations about what is done with CCTV footage, when, and by whom. A recent discussion with a friend threw up some interesting questions about the nature of these expectations and their reasonableness.
My friend works in a bar where, unsurprisingly, there are several CCTV cameras. Everyone knows where these cameras are and that they are permanently in operation – there is not supposed to be any secrecy. Whilst the primary purpose of the cameras is to promote security, a member of the management team has begun to use them in a way that could be seen as ethically problematic: she logs on to view the footage in real-time, remotely, at her home. In addition to watching the footage, the manager has also addressed points of staff discipline based on what she sees. Perhaps particularly troubling is that she has commented on the way a member of staff behaved when no one was around – when the member of staff thought that she was ‘alone’.