One of the great pleasures of studying human behaviour is seeing that what we find in our experiments, what we theorize in our papers and textbooks – however unlikely and counterintuitive it appears to be – actually predicts what happens in so-called real life. Take, for instance, the current build-up of a stock-market bubble in the UK, happening even more dramatically in the US. In the UK, the FTSE 100 is on its way to surpassing the record set during the high times of the dotcom bubble and has already surpassed the levels reached before the 2008 financial crisis; in the US, the Dow Jones has already reached new record highs. Despite having recently experienced the devastating consequences of a stock-market bubble bursting, banks and investors return a few years later to the same hyperbolic forecasts and predictions, and start to build up another bubble. It is as if the past did not exist. Compare this behaviour with the following anecdote, which most business school students probably know.
On the 22nd of October 1707, more than 1400 British sailors died when a British naval fleet sank in stormy weather off the Isles of Scilly. The disaster was later attributed to failings in navigation and sailors’ difficulty in determining their location at sea. This was a perennial problem at the time, and had persisted despite intense scientific research. Seven years later, the UK government passed the Longitude Act, offering 20,000 pounds (more than 2 million pounds in today’s money) to anyone who could develop a method for reliably determining longitude at sea. The longitude prize was eventually won by John Harrison, a self-educated Lincolnshire clockmaker.
Yesterday, 300 years after the original Longitude Act, the UK Technology Strategy Board launched a £10 million prize competition, a new ‘Longitude Prize’. The money will be awarded to a scientist or group of scientists who come up with a solution to one of a set of major global challenges – inadequate food and clean water supply, antibiotic resistance, spinal cord injury, dementia, and the large carbon impact of air travel.
The new Longitude Prize is the latest in a long series of innovation inducement competitions. These competitions have offered monetary rewards for solving problems as diverse as the development of butter substitutes, the first trans-Atlantic flight, reusable spacecraft, and an alternative fertilizer to bird droppings. One novel feature of the 2014 Longitude Prize is that it is seeking public input into the specific challenge to be targeted: public voting will decide which of the six global challenges above is to be the focus of the prize.
But are innovation prizes an effective or appropriate way to solve major global scientific challenges?
Personal Genome Project UK email disaster: If you can’t guarantee privacy, at least try to ensure trust
It’s not often that you can write on a topic in ethics whilst rolling around laughing, so I shall take this rare opportunity to make a few comments on the ludicrous breach of privacy that occurred last night when the Personal Genome Project messed up something as simple as an email list.
I’d expressed an interest in taking part in this project, which aims to sequence the genomes of hundreds of thousands of people and make these available, together with trait information, to researchers. There are clear potential worries about privacy here, as there is a potential to identify individuals from such a rich source of information. Nonetheless, I was excited to take part. After all, many of the people I know and love the most would not be alive today were it not for advances in medical science which have helped to treat diseases such as cancer and type 1 diabetes. In the past, many have risked life and limb for medical science. What was a little potential breach of privacy to worry about? Besides which, there has been considerable attention to ethics, privacy, and security around this project. There’s a whole ethics crew. Presumably they only hire the crème de la crème of data and IT experts. Surely these guys could be trusted to use our information wisely, and to do all they could to prevent irresponsible use?
Imagine that you and your partner are having a baby in hospital. Tragically something goes wrong unexpectedly during birth and the baby is born blue. He urgently needs resuscitation if there is to be a chance of preventing permanent severe brain damage. How long would it be reasonable for doctors to wait before starting resuscitation? 15 minutes? 5 minutes? 1 minute?
What would be a reasonable excuse for delaying the commencement of resuscitation? They wanted to get a cup of coffee? The mother wanted to hold the baby first? The mother had catastrophic bleeding and this needed urgent attention?
If it were my baby, I would not want any delay in starting resuscitation. And there would be no justification for delaying resuscitation except some more serious, more urgent problem for another patient, such as the mother.
Yet when people choose homebirth, delay is precisely what they choose. It is simply not possible to start advanced resuscitation in the home within minutes. And their reason is not typically some relevant competing health concern that necessitates delivery at home.
Choosing home birth is choosing delay if some serious problem arises which requires immediate resuscitation.
For a long time, objectivity and impartiality were perceived to be noble and uncontroversial goals for journalists. Objectivity is straightforwardly appealing – we want information that is accurate and undistorted by reporters’ personal politics. However, there is of late some pushback against that view (often called ‘The View from Nowhere’, a term which has apparently become such common parlance in the industry that the Wikipedia entry focuses on its use in journalism rather than on Nagel’s book, whose title inspired the movement). The idea, roughly, is that personal bias is unavoidable among journalists (and indeed the public in general). It is hypocritical to claim to offer impartial reporting because that impartiality can never be achieved; instead, reporters should simply embrace their normative perspectives and be up front about them and their influence on their work. But this move is a serious mistake, one that will subvert the central internal purpose of journalism and only serve to promote greater ignorance about the world.
In a recent article in the New York Times, Harvard economics professor Gregory Mankiw points out that economic policy advice always relies on political-philosophical standpoints and, inspired by medical ethics, suggests that economists who give policy advice should apply the ‘do no harm’ principle rather than promote policy based on uncertain predictions and political-philosophical convictions. Applying his interpretation of this principle, he claims that economists should endorse neither the Affordable Care Act nor a higher minimum wage, because these are in fact policies that cause harm.
It is refreshing to see an economist who recognises that there is no such thing as purely scientific, value-free economic policy advice, and it is interesting to consider whether ethical principles can be introduced to deal with biases inherent to policy advice and with uncertainties innate to economic predictions. However, Mankiw’s proposal is as biased as the policy advice he addresses, and his proposed version of the ‘do no harm’ principle is at best a poor re-articulation of his own ideological convictions.
There is a lively debate in the philosophy of psychiatry over what makes a condition a disease. The debate is particularly heated with regard to addiction: is it a moral failing, a brain disease, or something else altogether? People who hold that addiction is a brain disease often claim that their view is more humane, because it removes the stigma from a condition that is not the sufferer’s fault. Unfortunately, matters are not so clear cut: there is some evidence that the disease model actually increases stigma, or at least makes mental illness seem a more fixed part of the person’s identity.
Back in 2010, I blogged about Craig Venter’s creation of the first synthetic organism, Synthia, a bacterium.
Now, in 2014, the next step has been taken by a team at Johns Hopkins University: the use of synthetic biology in yeast, which, whilst still a simple organism, has a cell structure similar to that of humans (and other more complex organisms), with a nucleus, chromosomes and organelles. The engineered yeast has reproduced through over 100 generations, passing on its new DNA.
The pace is breathtaking. Moore’s law describes a phenomenon in computing, whereby computing capacity has (so far) doubled roughly every two years. Kurzweil uses Moore’s law to predict the ‘singularity’: a state in which humans no longer control, or even comprehend, the progress that technology continues to make.
It’s difficult to measure scientific progress in the same way as computer power, but it’s clear that leaps in progress are now measured in years, not decades. Yet still we wait until technology is upon us before we act.
Things I’ve learned (so far) about how to do practical ethics
I had the opportunity, a few months back, to look through some old poems I’d written in high school. Some, I thought, were pretty good. Others I remembered thinking were good when I wrote them, but now they seemed embarrassingly bad: pseudo-profound, full of clichés, marked by empty rhetoric instead of meaningful content. I’ve had a similar experience today with my collection of articles here at the Practical Ethics blog. And oh, the things I have learned!
Here are just a few of the lessons that have altered my thinking, or otherwise informed my views about “doing” practical ethics — particularly in a public-engagement context — since my very first blog post appeared in 2011: