The dating site OKCupid displays a message to visitors using the web browser Firefox asking them to switch browsers, since “Mozilla’s new CEO, Brendan Eich, is an opponent of equal rights for gay couples”. The reason is that Eich donated $1,000 six years ago to support Proposition 8 (a California ban on same-sex marriage). He, on the other hand, blogs that he is committed to making Mozilla an inclusive place and that he will try to “show, not tell” in doing so. The company at large is pretty firmly on the equality side in any case.
Will the technologisation of boycotting lead to consumer pressure being applied in a better way?
Things I’ve learned (so far) about how to do practical ethics
I had the opportunity, a few months back, to look through some old poems I’d written in high school. Some, I thought, were pretty good. Others I remembered thinking were good when I wrote them, but now they seem embarrassingly bad: pseudo-profound, full of clichés, marked by empty rhetoric instead of meaningful content. I’ve had a similar experience today with my collection of articles here at the Practical Ethics blog. And Oh, the things I have learned!
Here are just a few of the lessons that have altered my thinking, or otherwise informed my views about “doing” practical ethics — particularly in a public-engagement context — since my very first blog post appeared in 2011:
A Dutch program pays chronic alcoholics in beer for cleaning the streets and parks. A Canadian homeless shelter provides its alcohol-dependent clients with six ounces of white wine every 90 minutes. Giving alcohol to alcoholics seems counterproductive from a ‘just say no’ perspective, but I would like to argue that it makes sense on many levels.
The strongest case for giving alcohol to people with chronic alcohol dependence rests on the principle of ‘harm reduction’. Canadian ‘wet-shelter’ programs have emerged for two main reasons. The first is that many homeless shelters are abstinence-based, which means inveterate drinkers would continue to sleep rough, even in freezing winter months, resulting in tragic deaths. The second is that chronic inebriates often consume non-beverage alcohol such as hand sanitizer, mouthwash, and aftershave, thereby exacerbating already severe health problems. A recent study by the Centre for Addictions Research found that a “managed alcohol program” approach reduced emergency hospital visits and arrests among participants at the Kwae Kii Win Centre Managed Alcohol Centre by 40-80%. Significant changes among program participants included improved accommodation, renewed contact with their families, and a better diet. Whilst participants still receive alcohol throughout the day, it is given by staff in controlled doses at fixed intervals. The dose is enough to prevent withdrawal symptoms, but not high enough to cause intoxication. Although there are many formal harm-reduction programs for heroin users, such programs are less common for people who are alcohol dependent, despite the fact that withdrawal from alcohol can be lethal.
A study published last week (and summarized here and here) demonstrated that a computer could be trained to detect real versus faked facial expressions of pain significantly better than humans. Participants were shown video clips of the faces of people actually in pain (elicited by submerging their arms in icy water) and clips of people simulating pain (with their arms in warm water). The participants had to indicate for each clip whether the expression of pain was genuine or faked.
Whilst human observers could not discriminate real expressions of pain from faked expressions better than chance, a computer vision system that automatically measured facial movements and performed pattern recognition on those movements attained 85% accuracy. Even with practice, human accuracy only increased to 55%.
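For readers curious about what “pattern recognition on facial movements” involves, here is a toy sketch. It is purely hypothetical: the feature model, the numbers, and the nearest-centroid classifier below are invented for illustration only; the actual study extracted facial action measurements from video and used a far more sophisticated machine-learning pipeline.

```python
# Hypothetical sketch: classify "real" vs "faked" pain from facial-movement
# features. The features here are synthetic stand-ins, NOT the study's data.
import random

random.seed(0)

def make_sample(genuine):
    # Assumed toy model: each sample is 10 facial-movement features, with
    # faked expressions drawn from a shifted distribution (pure invention).
    base = 0.3 if genuine else 0.7
    return [random.gauss(base, 0.15) for _ in range(10)]

def centroid(samples):
    # Mean feature vector across a list of samples.
    return [sum(xs) / len(xs) for xs in zip(*samples)]

def sq_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# "Train": compute one centroid per class from labelled examples.
c_real = centroid([make_sample(True) for _ in range(50)])
c_fake = centroid([make_sample(False) for _ in range(50)])

def classify(sample):
    # Assign the sample to whichever class centroid is nearer.
    return "real" if sq_distance(sample, c_real) < sq_distance(sample, c_fake) else "fake"

# Evaluate on fresh samples.
tests = [(make_sample(True), "real") for _ in range(25)] + \
        [(make_sample(False), "fake") for _ in range(25)]
accuracy = sum(classify(s) == label for s, label in tests) / len(tests)
print(f"toy accuracy: {accuracy:.2f}")
```

The point of the sketch is only the shape of the task: numeric features extracted from faces, a classifier trained on labelled examples, then accuracy measured on held-out cases, which is the kind of evaluation behind the 85% figure reported above.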
The authors explain that the system could also be trained to recognize other potentially deceptive actions involving a facial component. They say:
In addition to detecting pain malingering, our computer vision approach may be used to detect other real-world deceptive actions in the realm of homeland security, psychopathology, job screening, medicine, and law. Like pain, these scenarios also generate strong emotions, along with attempts to minimize, mask, and fake such emotions, which may involve dual control of the face. In addition, our computer vision system can be applied to detect states in which the human face may provide important clues about health, physiology, emotion, or thought, such as drivers’ expressions of sleepiness and students’ expressions of attention and comprehension of lectures, or to track response to treatment of affective disorders.
The possibility of using this technology to detect whether someone’s emotional expressions are genuine raises interesting ethical questions. I will outline, and give preliminary comments on, a few of the issues:
Follow Rebecca on Twitter here
I’m working on a paper entitled ‘Cyborg justice: punishment in the age of transformative technology’ with my colleagues Anders Sandberg and Hannah Maslen. In it, we consider how punishment practices might change as technology advances, and what ethical issues might arise. The paper grew out of a blog post I wrote last year at Practical Ethics, a version of which was published as an article in Slate. A few months ago, Ross Andersen from the brilliant online magazine Aeon interviewed Anders, Hannah, and me, and the interview was published earlier this month. Versions of the story quickly appeared in various sources, beginning with a predictably inept effort in the Daily Mail, and followed by articles in The Telegraph, Huffington Post, Gawker, Boing Boing, and elsewhere. The interview also sparked debate in the blogosphere, including posts by Daily Nous, Polaris Koi, The Good Men Project, Filip Spagnoli, Brian Leiter, Rogue Priest, Luke Davies, and Ari Kohen, and comments and questions on Twitter and on my website. I’ve also received, by email, many comments, questions, and requests for further interviews and media appearances. These arrived at a time when I was travelling and lacked regular email access, and I have yet to get around to replying to most of them. Apologies if you’re one of the people waiting for a reply.
I’m very happy to have started a debate on this topic, although less happy to have received a lot of negative attention based on a misunderstanding of my views on punishment and my reasons for being interested in this topic. I respond to the most common questions and concerns below. Feel free to leave a comment if there’s something important that I haven’t covered.
This month an article published in the American Journal of Public Health (AJPH) outlined the results of a study on self-harm amongst jail inmates in New York City. Data on all jail admissions between January 2010 and October 2012 was analysed and the authors noted the following: “We found that acts of self-harm were strongly associated with assignment of inmates to solitary confinement. Inmates punished by solitary confinement were approximately 6.9 times as likely to commit acts of self-harm after we controlled for length of jail stay, SMI [serious mental illness], age, and race/ethnicity.”
This research provides an interesting springboard for a discussion. Can solitary confinement ever be justified, and if so, in what circumstances?
Neurofeedback works like this: you are hooked up to instruments that measure your brain activity (usually via electroencephalography or functional magnetic resonance imaging) and feed it back to you as auditory or visual signals. The feedback represents the brain activity and gives you a chance to modulate it, much as you might modulate the movements of your hand given visual or haptic feedback about its activity. What is interesting about neurofeedback is that it appears to train people to exercise some control over brain activity related to cognitive and mood-related processes. In other words, neurofeedback might allow agents to modify the activity in their brains such that mood, attentional capacity, and other mental functions improve.
There are reports in the press this week that the remains of 86 fetuses were kept in a UK hospital mortuary for months or even years longer than they should have been. The majority were fetuses of less than 12 weeks’ gestation. According to the report, this arose from administrative error and a failure to obtain the necessary permissions for cremation.
The hospital has publicly apologized and set up an enquiry into the error. It plans to cremate the remaining fetuses. However, it has decided not to contact all of the families and women whose fetal remains were kept, on the basis that doing so would likely cause greater distress.
Is this the right approach? Guidelines and teaching in medical schools encourage health-care professionals and institutions to own up to their errors and disclose them to patients. Is it justifiable then to not reveal errors on the grounds that this would be too upsetting? How much transparency is desirable in healthcare?
Last summer, on this blog, Rebecca Roache suggested several ways in which technology could enhance retributive punishment—that is, could make punishment more severe—without “resorting to inhumane methods or substantially overhauling the current UK legal system.” Her approbation of this type of technological development has recently been reported in the Daily Mail, and reaffirmed in an interview for Aeon Magazine.
Roache’s original post was, in part, a response to the sentencing of the mother and stepfather of Daniel Pelka, who was four when he died as a result of a mixture of violence and neglect perpetrated by his parents. They each received the maximum sentence possible in the UK: a minimum of thirty years in prison before the possibility of parole is discussed (and even then they might not get it). This sentence, Roache wrote, was “laughably inadequate.”