Much of the discussion about biomedical enhancement concerns whether a particular enhancement would or would not be a good, ethical, or efficient means of enhancing some human characteristic. In this blog and elsewhere in the bioethical literature, bioethicists discuss the proposed effects that biomedical enhancements would have on, for example, intelligence and other cognitive capacities, empathy, sunny mood, altruism, a sense of justice, or our ability to halt climate change. The list is endless. The discussion of efficacy, ethics, justice, and human nature is an important part of the whole philosophical debate, as is the discussion about the limits of philosophy, reality, and science fiction. However, one point deserves more emphasis: inspecting the very concepts that are the targets of enhancement. What do intelligence, sunny mood, altruism, a sense of justice, and the-characteristics-that-prevent-us-from-halting-climate-change really mean?
If the target characteristics are looked at carefully, much of the discussion can be described as a form of linguistic bewitchment in which concepts of human language and concepts of the empirical science of biology are mixed. Just because human language includes concepts such as intelligence, altruism, sunny mood, criminal, and sense of justice, it does not follow that any concrete physical entities correspond to them.
On October 30th, Professor Walter Sinnott-Armstrong of Duke University gave the 2014 Wellcome Lecture in Neuroethics. His talk, “Implicit Moral Attitudes”, concerned the practical and theoretical implications of recent empirical research into unconscious or sub-conscious beliefs or associations.
An audio recording of the talk is available here: http://media.philosophy.ox.ac.uk/neuro/MT14_WLN_WSA.mp3. For those interested who were unable to attend, I will summarise the main points of Sinnott-Armstrong’s talk and some of the discussion that occurred during the Q&A afterwards.
Guest Post: Alexander Andersson, MA student in practical philosophy, University of Gothenburg
In Unfit for the Future: The Need for Moral Enhancement, Ingmar Persson and Julian Savulescu argue that we, as a human race, are in deep trouble. According to the authors, global warming, weapons of mass destruction, poverty, famine, terrorists, and even liberal democracies are all candidate components of our potential apocalypse. Facing these issues requires us to make the morally right decisions; however, our current moral deficiencies seem to prevent us from making them. As the authors put it:
[H]uman beings are not by nature equipped with a moral psychology that empowers them to cope with the moral problems that these new conditions of life create. Nor could the currently favoured political system of liberal democracy overcome these deficiencies. (Persson & Savulescu, 2012, p. 1)*
It is therefore desirable to look for means of ridding ourselves of these deficiencies, which in turn would make us morally better persons and allow us to avoid the disastrous situations that otherwise lie ahead. Luckily, Persson and Savulescu do not seem to suffer from moral deficiency, which enables them to put forth a creative plan to save the day.
There could be increased numbers of psychopaths in senior managerial positions and at high levels of business: a paper in the Journal of Forensic Psychiatry & Psychology has demonstrated that smart psychopaths are hard to detect as psychopaths. The authors tested participants for psychopathic tendencies using a psychological scale, and then measured their arousal levels through galvanic skin response while showing them normal or upsetting images. The interesting finding was that only lower-IQ participants showed the responses expected of psychopaths (a lowered startle response when viewing aversive images): smarter participants seemed able to control their emotions.
The lead author, Carolyn Bate, said:
“Perhaps businesses do need people who have the same characteristics as psychopaths, such as ruthlessness. But I suspect that some form of screening does need to take place, mainly so businesses are aware of what sort of people they are hiring.”
Should we screen people at hiring for psychopathy?
It’s still summery, and so here is a little story for the beach or the side of the pool.
‘There are challenges, certainly’, said the Boss. ‘But we’re confident that we can meet them. Or at least’, he went on, looking over his glasses for signs of dissent, ‘for a critical mass of stakeholders’.
A graph appeared on the screen at his side. He traced its lines with a red laser dot.
‘Here’, he said, ‘we have the expected rise of temperature with time. And here’ (he stabbed with the dot, as if doing the killing himself), ‘we have the consequent reduction in human population – assuming’ (and he held up a schoolmasterly finger), ‘we don’t have any HR66.’
He sipped some water, and waited for this to sink in. It did.
‘But don’t worry’, he said. ‘There’s good news. We do have HR66. Not enough for everyone, sadly, but enough to ensure that the human baton is passed on. And enough, I’m glad to say, for everyone in this room.’
There was a ripple of relief.
‘And their families, of course’, the Boss continued. ‘Families are very important to us. But all this assumes that you want to have the HR66. No one will make you. But, frankly, what’s not to like? You take a single dose, and you survive. If you don’t take it, you don’t survive. It’s as simple as that. It even tastes of candy floss. It has only one side-effect, and that’s a wholly good thing. It increases – increases, mark you – your IQ. Very, very significantly. By about 100 points, in fact. Not only will you be alive; you’ll be a genius beside whom Einstein would have seemed a hopeless retard.’
One more press of the button, and up flashed the logo of the corporation that manufactured HR66. The Boss didn’t think it relevant to mention his shareholding.
‘Naturally’, said the Boss, ‘we have to vote for this in the usual way. Yes, humanity’s facing apocalypse, and there’s one, and only one way out. But we’ve still got to do things properly. But I expect that we can move to a vote now, can’t we?’
‘I’m sure we can’, agreed the Deputy. ‘You’ve all seen the motion. All those in favour….’
The Boss and the Deputy, up on the podium, stared. Everyone else turned. A little man in tweed lisped through a badger’s beard. ‘I’d like some clarification, please.’
‘But of course, Tom’, said the Boss, magnanimous and desperately alarmed. ‘Anything you like.’
No one really knew how Tom had got into the government, or why he wanted to be there. He had no strategically significant connections, no dress sense, no publications other than some monographs on moths and mediaeval fonts, no assets other than a dumpy wife, some anarchic, unwashed children and a small cottage on Dartmoor, and no entries in the Register of Members’ Interests apart from ‘Masturbation’. This entry had caused a terrible storm. He’d been accused of injuring the dignity of the House, but, after expensive legal advice had been taken, it had been ‘reluctantly concluded’ that there was no power to force him to remove it.
‘I’d like to know’, said Tom, ‘who’s going to get the drug. And why them rather than anyone else.’
Subtly designing people’s choice environment so that they decide on a desired course of action – so-called “nudging” – is receiving growing interest as a potential tool for practical ethics. New psychological research suggests a surprisingly simple, but potentially powerful, strategy for nudging people.
By Kimberly Schelle & Nadira Faulmüller
Horizon 2020, the European Union’s largest research programme ever, running from 2014 to 2020, includes a call to pursue ‘Responsible Research and Innovation’ (RRI). RRI stands for a research and innovation process in which all societal actors (e.g. citizens, policy makers, businesses, and researchers) work together to align the outcomes with the values, needs, and expectations of European society. In a recently published paper on the importance of including the public’s and patients’ voices in bioethical reasoning, the authors describe, albeit in other words, the value of the RRI approach to bioethical issues:
“A bioethical position that fails to do this [exchange with the public opinion], and which thus avoids the confrontation with different public arguments, including ones perhaps based in different cultural histories, relations and ontological grounds […], not only runs the risk of missing important aspects, ideas and arguments. It also arouses strong suspicion of being indeed one-sided, biased or ideological—thus illegitimate.”
Recently a neuroscientist discovered he was a psychopath. He was studying the brain scans of psychopaths, and intended to use some brain scans of family members and one of himself for the control group. When one of the scans from the control group showed clear signs of psychopathy, he thought he must have misplaced it. He checked the reference number, and found out it was his own brain! This came as a total surprise to him: he had never shown any signs of psychopathy. Yet he was convinced that if his brain scan showed similarities with those of psychopaths, he must be a psychopath himself. In retrospect, his wife admitted that she thought he had some of the signs, such as a lack of empathy, and he found some famous murderers in his family. Instead of hiding this intimate fact about himself, he wrote a book about it, showing how amazing brain scans are. His book argued that brain scans can detect a psychopath like him, who had never had any compelling symptoms of psychopathy.
Last week, we held an expert workshop with key stakeholders to discuss our recent Oxford Martin School policy paper. Our policy paper put forward proposals for how we thought cognitive enhancement devices such as brain stimulators should be regulated. At present, if these sorts of devices do not make medical treatment claims (but instead claim to make you smarter, more creative or a better gamer, say) then they are only subject to basic product safety requirements. In our paper we suggested that cognitive enhancement devices should be regulated in the same way as medical devices and discussed how this could be implemented. Indeed, the devices that are being sold for enhancement of cognitive functions use the very same principles as devices approved by medical device regulators for research into the treatment of cognitive impairment or dysfunction associated with stroke, Parkinson’s disease and depression (amongst other conditions). Being the same sorts of devices, acting via similar mechanisms and posing the same sorts of risks, there seemed to be a strong argument for regulation of some form and an equally strong argument for adopting the same regulatory approach for both medical and enhancement devices.
Having published our paper, we were very keen to hear what people more closely involved in making policy and drafting legislation thought of our proposals. Individuals from the Medical and Healthcare Products Regulatory Agency, the EU New and Emerging Technologies Working Group, a medical devices company, the Nuffield Council on Bioethics, and experts on responsible innovation and on brain stimulation joined us. Overall, the response to our recommendations was positive: all participants agreed that some regulatory action should be taken. There was a general consensus that this regulation should protect consumers but not curtail their freedom to use devices, that manufacturers should not be over-burdened by unnecessary regulatory requirements, and that innovation should not be stifled.