Written by William Isdale,
of The University of Queensland
As many readers will be aware, this year will mark the conclusion of the Millennium Development Goals. For some of these goals, expectations have been exceeded; for instance, the goal of halving global poverty (defined as living on less than US$1.25 a day) was achieved back in 2010.
There are good grounds for believing that extreme poverty can be almost entirely eradicated within our lifetimes. But, for now, a lot of work remains to be done; the average life expectancy among the ‘bottom billion’ remains a miserable fifty years, and the most recent UNICEF estimate of poverty-related deaths among children is 6.3 million each year.
The Court of Protection is due very soon to review the case of a teenager with a relapsed brain tumour. The young man had been diagnosed with the tumour as a baby, but it has apparently come back and spread so that, according to his neurosurgeon, he has been “going in and out of a coma”. In February, the court heard from medical specialists that he was expected to die within two weeks, and authorised doctors to withhold chemotherapy, neurosurgery and other invasive treatments, against the wishes of the boy’s parents.
However, three months after that ruling, the teenager is still alive, and so the court has been asked to review its decision. What should we make of this case? Were doctors and the court wrong?
There appears to be a lot of disagreement in moral philosophy. But whether or not these many apparent disagreements are deep and irresolvable, I believe there is at least one thing it is reasonable to agree on right now, whatever general moral view we adopt: that it is very important to reduce the risk that all intelligent beings on this planet are eliminated by an enormous catastrophe, such as a nuclear war. How we might in fact try to reduce such existential risks is discussed elsewhere. My claim here is only that we – whether we’re consequentialists, deontologists, or virtue ethicists – should all agree that we should try to save the world.
A majority in the House of Commons has provided David Cameron with the freedom to do over the next five years some of the things that he’s found difficult over the last five. One of the things that is set for reform is the law on inheritance tax, with the Tory manifesto having pledged to
take the family home out of tax by increasing the effective Inheritance Tax threshold for married couples and civil partners to £1 million – so you can keep more of your income and pass it on to future generations. (p 3)
(UKIP upped the ante on this, promising to get rid of inheritance tax altogether.)
How big an impact the Conservative policy would make is hard to tell: most people don’t pay inheritance tax anyway, and so raising the threshold would affect only a portion of the residuum that would pay it. But, still: we might ask whether such a policy is just. For sure, there will be some people for whom it’s attractive – archetypally, the sort of person who bought a property in a then down-at-heel part of London or Manchester a generation ago who finds that it is now something of a golden egg. But attractiveness in a policy will only take us so far. To answer the justice question, we need to look at the principles behind it. And once we do that, I’m not so sure that the policy is just. Indeed, it’s not clear that there’d be anything unjust about having a much higher rate of inheritance tax.
The reason for the claim that reducing the inheritance tax burden is unjust is straightforward: it means that those who were fortunate with their parents get a helping hand not available to everyone. The children of dentists will, at some point, receive a capital benefit that would not be matched by the children of dustmen. Since this difference is arbitrary – no one deserves rich or poor, thrifty or feckless parents – there is a case to be made that the just society would seek to smooth it out to as great a degree as possible. At least on paper, we might be tempted to think that a 100% inheritance tax would be a way to do this: it would ensure that no one benefited at all from ancestral good fortune. In practice, there’d doubtless be all kinds of workarounds that’d make such a high rate unenforceable – but the case might stand in principle.
Is the moral case, then, that easily made?
Former Auschwitz SS officer Oskar Gröning is currently being tried as an accessory to murder for his role as an administrator in the extermination camp, and the trial has stirred up a lot of debate. One strand of the debate addresses the question whether Gröning was complicit in the extermination of prisoners, and whether he was culpable for this complicity. (Roger Crisp wrote a fascinating post on this a couple of weeks back.) But another strand – and the strand that I want to look at here – has addressed the question whether former Nazi war criminals should be tried and punished for deeds in their distant past. Eva Mozes Kor, an Auschwitz survivor and witness in Gröning’s trial, has claimed that he shouldn’t be tried, though he should use his knowledge to help fight Holocaust denial.
Let’s suppose that Gröning was indeed a culpable accomplice to murder. Should he then be punished? More generally, should serious crimes from decades ago be punished? My intuition is that they should, but in reflecting on why, I have found it is not straightforward to defend this view.
Many important discussions in practical ethics necessarily involve a degree of speculation about technology: the identification and analysis of ethical, social and legal issues is most usefully done in advance, to make sure that ethically-informed policy decisions do not lag behind technological development. Correspondingly, a move towards so-called ‘anticipatory ethics’ is often lauded as commendably vigilant, and to a certain extent this is justified. But, obviously, there are limits to how much ethicists – and even scientists, engineers and other innovators – can know about the actual characteristics of a freshly emerging or potential technology – precisely what mechanisms it will employ, what benefits it will confer and what risks it will pose, amongst other things. Quite simply, the less known about the technology, the more speculation has to occur.
In practical ethics discussions, we often find phrases such as ‘In the future there could be a technology that…’ or ‘We can imagine an extension of this technology so that…’, and ethical analysis is then carried out in relation to such prognoses. Sometimes these discussions are conducted with a slight discomfort at the extent to which features of the technological examples are imagined or extrapolated beyond current development – discomfort relating to the ability of ethicists to predict correctly the precise way technology will develop, and a corresponding reservation about the value of any conclusions that emerge from discussion of, as yet, merely hypothetical innovation. A degree of hesitation in relation to very far-reaching speculation does indeed seem justified.
by Nigel Warburton, @philosophybites
On May 3rd two men opened fire on a security guard near the ‘Muhammad Art Exhibit & Contest’, an event in Garland, Texas, that advertised a $10,000 prize for the best cartoon drawing of the prophet. The assailants were shot dead by the police. Pamela Geller, the organiser of the event, is a political blogger who, inflamed by 9/11, has mounted a well-funded campaign against what she sees as an Islamization of America and the ‘mosque-ing’ of the workplace (see this Washington Post article). The immediate catalyst for her ‘draw Muhammad’ stunt was the Charlie Hebdo attacks in Paris. She was within her US First Amendment rights to organise this event, and explicitly defended it on free speech grounds. She told the Washington Post: ‘We decided to have a cartoon contest to show we would not kowtow to violent intimidation and allow the freedom of speech to be overwhelmed by thugs and bullies.’
This justification is very similar to that behind the decision of Charlie Hebdo’s surviving editors to publish the Muhammad cartoon on the front cover of the first issue of the magazine after the murders. It also echoes a more nuanced version of this stance given by Timothy Garton Ash in an article, ‘Defying the Assassin’s Veto’, in the New York Review of Books, in which he argued for reproducing a wide range of Charlie Hebdo’s covers, not just those which satirised Muhammad. Where violence is threatened, and there is a risk of self-censorship through fear, one of the best ways of standing up to the assassin’s veto is to produce more of the very thing that offends the would-be assassin, in reaction and solidarity.
In a recent issue of the Journal of Medical Ethics, Thomas Ploug and Søren Holm point out that scientific communities can sometimes get pretty polarized. This happens when two different groups of researchers consistently argue for (more or less) opposite positions on some hot-button empirical issue.
The examples they give are debates over the merits of breast cancer screening and the advisability of prescribing statins to people at low risk of heart disease. Other examples come easily to mind. The one that pops into my head is the debate over the health benefits vs. risks of male circumcision—which I’ve covered in some detail here, here, here, here, and here.
When I first started writing about this issue, I was pretty “polarized” myself. But I’ve tried to step back over the years to look for middle ground. Once you realize that your arguments are getting too one-sided, it’s hard to go on producing them without making some adjustments. At least, not without losing credibility — and no small measure of self-respect.
This point will become important later on.
Nota bene! According to Ploug and Holm, disagreement is not the same as polarization. Instead, polarization only happens when researchers:
(1) Begin to self-identify as proponents of a particular position that needs to be strongly defended beyond what is supported by the data, and
(2) Begin to discount arguments and data that would normally be taken as important in a scientific debate.
But wait a minute. Isn’t there something peculiar about point number (1)?
On the one hand, it’s framed in terms of self-identification, so: “I see myself as a proponent of a particular position that needs to be strongly defended.” Ok, that much makes sense. But then it makes it sound like this position-defending has to go “beyond what is supported by the data.”
But who would self-identify as someone who makes inadequately supported arguments?
We might chalk this up to ambiguous phrasing. Maybe the authors mean that (in order for polarization to be diagnosed) researchers have to self-identify as “proponents of a particular position,” while the part about “beyond the data” is what an objective third-party would say about the researchers (even if that’s not what they would say about themselves). It’s hard to know for sure.
But the issue of self-identification is going to come up again in a minute, because I think it poses a big problem for Ploug and Holm’s ultimate proposal for how to combat polarization. To see why, though, I have to say a little bit more about what their overall suggestion is in the first place.
Could the fact that someone is more Scrooge-like – less willing to sacrifice for the sake of doing good – entail that morality is less demanding for her? The answer to this question has important implications for a host of issues in practical ethics, including issues surrounding adoption, procreation, charity, consumer choices, and self-defense.