You are a public health official responsible for purchasing medications for the hospitals within your catchment area in the NHS. Your policies significantly affect which, out of the serpentine lists of heart disease medications, for example, are available to your patients. Today, you must choose which of three heart disease medications to purchase: Drug A, Drug B, or Drug C. They are quite similar in efficacy, and all three have been in use for many years. Drug B is slightly less expensive than Drug A and Drug C, but there is emerging evidence that it increases the likelihood that patients will take “bad bets,” i.e. make large gambles when the chance of winning is low (and thus might contribute to large social costs). Drug C costs a tiny bit more than Drug A, but there is some evidence that Drug C may help decrease implicit racial bias. You have been briefed on the research suggesting that implicit racial bias can lead people to make choices that consistently and unintentionally limit the opportunities of certain groups, even when all the involved parties show explicit commitments to social equality. Finally, there is emerging evidence that Drug A both helps people abstain from alcohol and dissociates negative emotional content from memories.
Which drug should you purchase?
Let us begin to think about this question through the lens of the idea of the “Nudge,” which has exploded into the public sphere (and blogosphere) since Thaler and Sunstein published their book, “Nudge: Improving Decisions About Health, Wealth, and Happiness” (see the blog here). I briefly and incompletely introduce nudges here, in hopes that we may soon move on to discuss the kind of “nudge drugs” our thought experiment considers.
“Shyness, bereavement and eccentric behaviour could be classed as a mental illness under new guidelines, leaving millions of people at risk of being diagnosed as having a psychiatric disorder, experts fear,”
reads the title of a news article earlier this month in the wake of the publication of the most recent draft of the American Psychiatric Association’s proposed revision to the Diagnostic and Statistical Manual of Mental Disorders (DSM), which is used as a handbook for psychiatrists in the United States.
With this blog post, I hope that we can begin a discussion of a) the reasons undergirding fears of being “diagnosed as having a psychiatric disorder” and b) whether – counterintuitively – there might be a moral reason to include common behaviors in the DSM, because doing so might help us avoid these feared consequences.
The airwaves buzzed last week on BBC radio about biological predispositions towards violence, brain-based lie detection systems, tumors associated with pedophilia, and psychopaths. The BBC looked to the Neuroethics Centre’s own Walter Sinnott-Armstrong for his perspective on neuroscience in law in light of the release of the Royal Society’s recent report on the topic (on which he acted as a reviewer). The short and sweet BBC podcast can be found here (the segment on NeuroLaw begins at 12:52). While much of the debate so far has focused on the dangers neuroscience might bring to the legal system, and therefore on caution in adopting neuroscience in legal settings, Walter Sinnott-Armstrong pointed out that the potential to help is also huge. Neuroscience investigating the brain networks active in chronic pain could help build evidence that someone is suffering chronic pain. It might complement actuarial risk estimates to help better predict future dangerousness when offenders are up for parole (an area where expert opinion by psychologists is notoriously wrong two out of three times). And it may help identify cases of shaken-baby syndrome. With this potential comes an intriguing question: do we have a responsibility to use neuroscience in law?
Yesterday evening, in front of a record audience in the Oxford Martin School building, Dr. Molly Crockett delivered the Wellcome Lecture in Neuroethics: “Moral enhancement? Evidence and challenges” (a podcast of the lecture will soon appear in the events archives here).
In her engaging talk, Dr Crockett spoke of the emerging body of neuroscience research she and others have been conducting on neurobiological modifiers of moral behavior and how manipulations in neurotransmitter systems can affect that moral behavior.
For example, in a study in which subjects were presented with two classic trolley problems, whether they had previously received an antidepressant that increased the availability of the neurotransmitter serotonin in the neuronal synapse (in this case, a Selective Serotonin Reuptake Inhibitor – SSRI) significantly shifted people’s decisions toward a deontological, as opposed to a consequentialist, framework. Namely, the group that had received the SSRI was less likely to say it was OK to push a very large man off a bridge in front of a trolley in order to save five workers who would otherwise certainly die.
From a deontological point of view, this increased aversion to harming others after taking the SSRI might be thought of as a moral enhancement; from a consequentialist point of view, it might be thought of as an impairment.
In September 2011, the most advanced computer game to use a consumer brain–computer interface (BCI) will go on sale. Its name is Focus Pocus (see the video trailer here; it’s awesome), and it is aimed at children with ADHD so that they might use gamification to train their brains to improve focus and impulse control.
The game is based on neurofeedback enabled by the Neurosky dry-electrode EEG (electroencephalogram) headset, which anyone can purchase for under $100 (or 100 Euros if in Europe). Earlier this week, BBC2 did a special on the headset. The basic idea is that the single electrode on the Neurosky headset (placed on the forehead) is able to pick up a few simple and characteristic brainwaves (created by activity in populations of neurons), some of which have been shown to be enriched when the subject is awake and attentive (e.g. beta waves), and some when the subject is relaxed (e.g. alpha waves). Neurosky has developed algorithms to funnel these and other brain waves into measures of “focus” and “meditation.” Look here for more details on how it works.
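Neurosky’s actual signal-processing pipeline is proprietary, but the general flavor of such an algorithm can be sketched in a few lines of Python: estimate the power in the alpha and beta frequency bands of the raw signal, then report how much of the combined power is beta. The band edges and the `focus_score` function below are illustrative assumptions, not Neurosky’s implementation.

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Approximate power in the [low, high] Hz band via the FFT power spectrum."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs <= high)
    return spectrum[mask].sum()

def focus_score(signal, fs=256):
    """Toy 'focus' measure: the fraction of alpha+beta power that is beta.

    Alpha (8-12 Hz) is associated with relaxation, beta (13-30 Hz) with
    attention; the band edges are conventional, illustrative choices.
    """
    alpha = band_power(signal, fs, 8, 12)
    beta = band_power(signal, fs, 13, 30)
    return beta / (alpha + beta)

# Synthetic one-second "recordings" at 256 Hz: a pure 20 Hz (beta) wave
# should read as focused, a pure 10 Hz (alpha) wave as relaxed.
t = np.arange(0, 1, 1 / 256)
print(focus_score(np.sin(2 * np.pi * 20 * t)))  # close to 1
print(focus_score(np.sin(2 * np.pi * 10 * t)))  # close to 0
```

A real system would of course also need artifact rejection (eye blinks and forehead muscle activity swamp a single dry electrode) and smoothing over time before the score could drive a game.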
I was just in LA. I was surprised and pleased when a good friend of mine mentioned this brilliant new transportation scheme the city had developed. Basically, with sponsorship from a few businesses, the city had placed hundreds of electric cars at street-side parking spots (where the car batteries recharged) throughout the most frequented neighborhoods. The idea was that anyone (tourist or city-dweller) could rent an electric car by the half-hour, paying by card at a nearby pay station. The renter could then bring the electric car back to any of the city parking spots by the set time. What was even more convenient was that city-dwellers could get an “LA-Car Card” by paying a nominal fee, which would allow them to take out electric cars for up to a half hour at a time for free! Environmental AND convenient! So of course I went straight to the nearest electric car parking spot, paid the fee, and was soon zooming about the streets of LA. I had never driven an electric car before, so it felt a bit odd, but I was so excited by the new car scheme that I didn’t let it bother me. Everything was grand until a reckless driver ran a red light in front of me and nearly took me off the road. Thankfully it was only nearly. But with that close call, I realized that the reason the car had felt a bit odd was that the car had NO SEATBELTS! That’s right: no seatbelts. What would have happened if I had been in a car accident?
These electric cars were environmentally friendly, yes. Super-convenient, yes. But wasn’t it irresponsible for the city to encourage needless risk-taking (and traumatic brain injury) by providing this transport without the most basic safety feature? For, worried as I was about my safety without a seat-belt (and I would never drive my own car without my seat-belt), I found myself renting and driving the cars for the rest of my stay just because they were so darn convenient. I oscillated between decrying the city’s irresponsible behavior and applauding their creation of such a convenient scheme. Which was the proper stance? And was there a rational reason why these cars did not have seat-belts?
“George Orwell was not a peace journalist; he was a proper journalist!”
Jean Seaton, professor of media history at the University of Westminster and official historian for the BBC, hurled the comment from her seat in the audience onto the stage, interrupting the speaker, Richard Keeble, professor at the University of Lincoln’s school of journalism. Keeble’s passing claim about George Orwell in last Saturday’s OxPeace conference on “Media in Conflict and Peacebuilding” (recordings of the talks will shortly be available on OUCS iTunes) visibly (and audibly) upset Seaton, who was also present as a speaker.
Why did Seaton treat the title of “peace journalist” as an insult?
Matthew L Baum
Round 1: Baltimore
I first heard of the Malleus Maleficarum, or The Hammer of Witches, last year when I visited Johns Hopkins Medical School in Baltimore, MD, USA. A doctor for whom I have great respect introduced me to the dark leather-bound tome, which he pulled off his bookshelf. Apparently, this aptly named book was used back in the day (it was published in the late 1400s) by witch-hunters as a diagnostic manual of sorts to identify witches. Because all the witch-hunters used the same criteria as outlined in The Hammer to tell who was a witch, they all – more or less – identified the same people as witches. Consequently, the cities, towns, and villages all enjoyed a time of highly precise witch wrangling. This was fine and good until people realized that there was a staggering problem with the validity of these diagnoses. Textbook examples (or Hammer-book examples) these unfortunates may have been, but veritable wielders of the dark arts they were not. The markers of witchcraft these hunters agreed upon, though precise and reliable, simply were not valid.
In a paper published this last Friday, University of Oxford researchers showed that electrical stimulation may help people learn numbers faster (see Julian Savulescu’s post, “Why Bioenhancement of Mathematical Ability Is Ethically Important”). In the experiment, 20 minutes of stimulation to the right parietal lobe, a brain region previously shown to be important in numerical ability, caused subjects to learn more quickly the relative magnitudes (i.e. which is bigger) of a set of nonsense symbols. The magnitude of each symbol was assigned beforehand by the experimenters.
When Cardiff University’s Christopher Chambers was reached for comment in the BBC article covering this paper, “he said that the study did not prove that the learning of maths skills was improved, just that the volunteers were better at linking arbitrary numbers and symbols and warned that the researchers needed to make sure other parts of the brain were unaffected.” Dr Chambers has an important cautionary point, but could learning the magnitudes of arbitrary symbols alone be beneficial? And how much would it matter if other brain areas were affected?