Guest Post: “Gambling should be fun, not a problem”: why strategies of self-control may be paradoxical.
Written by Melanie Trouessin
University of Lyon
Faced with issues related to gambling and games of chance, the Responsible Gambling program aims to promote moderate behaviour on the part of the player. It is about encouraging risk avoidance and offering self-limiting strategies, both temporal and financial, in order to counteract the player's tendency to lose self-control. While this strategy rightly promotes individual autonomy, compared with other more paternalistic measures, it also implies a particular position on the philosophical question of what is normal and what is pathological: a continuum position. If we can subscribe to some measures of self-constraint in order to return to responsible, that is, moderate and controlled, gambling, this implies there is no great gulf or qualitative difference between normal gaming and pathological gambling.
The Oxford Martin School recently held a two-day symposium on virtual reality and immersive technologies. The aim was to examine a range of technologies, from online games to telepresence via a robot avatar, to consider the ways in which such technologies might affect our personal lives and our interactions with others.
These sorts of technologies reignite traditional philosophical debates concerning the value of different experiences – could a virtual trip to Rome ever be as valuable (objectively or subjectively) as a real trip to Rome? – and conceptual questions about whether certain virtual activities, say, ‘having a party’ or ‘attending a concert’, can ever really be the activity that the virtual environment is designed to simulate. The prospect of robotic telepresence presents particular ethical challenges pertaining to moral responsibility for action at a distance and ethical norms governing virtual acts.
In what follows, I introduce and discuss the concern that virtual experiences and activities are to some extent deficient in value, especially where this relates to the formation and maintenance of close personal relationships.
There is a long overdue crisis of confidence in the biological and medical sciences. It would be nice – though perhaps rather ambitious – to think that it could transmute into a culture of humility.
A recent comment in Nature observes that: ‘An unpublished 2015 survey by the American Society for Cell Biology found that more than two-thirds of respondents had on at least one occasion been unable to reproduce published results. Biomedical researchers from drug companies have reported that one-quarter or fewer of high-profile papers are reproducible.’
Reproducibility of results is one of the girders underpinning conventional science. The Nature article acknowledges this: it is accompanied by a cartoon showing the crumbling edifice of ‘Robust Science.’
As the unwarranted confidence of scientists teeters and falls, what will – and what should – happen to bioethics?
Written by Professor Neil Levy
The recent discovery of what is claimed to be a distinct species of the genus Homo, our genus, raises to three the number of species that may have co-existed with Homo sapiens. Homo naledi is yet to be dated, but it may be only tens of thousands of years old; if so, it coexisted with modern humans. Homo floresiensis, the so-called ‘hobbit’, seems to have been extant well after sapiens evolved, and there is strong evidence that the Neanderthals coexisted with, probably interbred with, and may have been killed by, our ancestors.
If any of these species had survived into contemporary times, we would face a novel ethical question: how to negotiate our stance toward a species that is not quite human, but too close to be regarded as simply animal (using that word in its common meaning, to refer to non-human animals). More specifically, we would face the problem of how to respond to another deeply cultural being. Naledi seems to have had a culture – so the researchers conclude from the placement of the bones, which they think indicates burial. Perhaps they used language (floresiensis seems a very good candidate for language use). Yet they might not have been the intellectual equals of modern humans (perhaps they were – genetic difference certainly doesn’t entail inferiority – but for the purposes of this post I will assume they weren’t). If they were our contemporaries, would we be obliged to allow them to vote? To have affirmative action for them in universities and in jobs (assuming that some of them, perhaps rare geniuses, could function at a high enough level to take advantage of these opportunities)? Should we treat them as permanent children, appointing guardians for them?
Some philosophers would say that the answer to these questions is quite easy: we should give them equal consideration. Equality of consideration is the kind of equality which philosophers like Peter Singer argue should be extended to chickens and chimps, just as much as to human beings. Treating chickens equally in that sense doesn’t entail affirmative action or voting rights for chickens, because chickens don’t have an interest in either. It just requires taking their interests equally into account.
While there are strong reasons for thinking we ought to extend equality of consideration to Homo naledi, floresiensis and Neanderthals, that doesn’t tell us the answer to the concrete questions. Insofar as they are self-aware, these people (let’s call them that) have an interest in self-government, and therefore in voting. But (let’s assume) they have a limited capacity to understand the issues on which we vote. As self-aware beings, they might be harmed by being treated as inferior. But there may be good grounds for thinking that they are inferior.
We might offer them limited rights: rights to vote in elections for people who have the special role of looking after their interests. That would entail that they are not as self-governed as we are, since they would be living in a broader society (or in a world, at any rate) in which decisions are taken over which they have less say than we do.
I don’t think there are good answers to these questions. That is, while I am sure there are better and worse answers, I think this would be a true moral dilemma: the best possible response would have big moral costs. There seems to be no way of acting that would not involve some harm to a properly cultural being that couldn’t be fully autonomous: harm that would arise from its awareness that it was less autonomous and less able to govern its own life than others.
Julian Baggini sees in the discovery of naledi good news for humanity; it shows that in some sense we are not alone. Perhaps, but had they survived, we would face a tragic dilemma. To that extent, we are lucky that they didn’t. Genetic diversity among modern human beings is tiny, with genetic differences between groups swamped by those within them. That ensures that the questions we face about how to treat members of other groups are in one central way easier: they are in every important respect our equals. Our ethics would struggle to settle how to treat a deeply cultural group, distinct from us, which is in some respects not our equal.
Written by William Isdale
University of Queensland
This year is the 70th anniversary of the atomic bombing of Hiroshima and Nagasaki. Are there any moral lessons we can learn from that historical episode? I think so.
Recently I delivered a talk on radio about this topic. I argued that one key reason to study history is to learn lessons about human nature. The war in the Pacific against Japan can teach us about (1) our tribal natures, (2) the limits of empathy when we kill from a distance, and (3) the ratchet-up effect of retaliatory violence.
We have a moral obligation to take heed of those lessons, for instance by reining in our more dangerous traits. The existence of nuclear weapons, because of their destructive power, makes the imperative to understand and control our natures all the more significant.
Below is a slightly adapted version of what I said.
This year marks 70 years since the end of World War Two, a conflict that ended with the use of the most destructive weapons ever invented: the atomic bombs dropped on Hiroshima and Nagasaki.
Has it ever occurred to you to ask, just what is the point of commemorating wars? Do we commemorate them because they are interesting, or are there more important reasons?
If you’ve ever attended a war commemoration ceremony, you’ve probably heard speakers talking about the gratitude that we owe to those who fought to defend our way of life. Or speeches that urge us to reflect on the tragedy of lives lost, and the risks of rushing into conflict. And those are good reasons for remembering wars. But, in my view, they’re not the most important ones.
The Scottish philosopher David Hume once wrote that the principal reason to study history is to discover “the constant and universal principles of human nature”. And in no other area of human life is learning those lessons more important, than when they concern war.
By studying wars we can learn lessons about ourselves. About how we get into them – why we keep fighting them – and what we do to justify extraordinary levels of cruelty and destruction visited on others.
Today I want to uncover three lessons about human nature that are revealed to us by the war in the Pacific against Japan – and particularly, from the nuclear bombing of Hiroshima and Nagasaki.
by Joao Fabiano
Why inequality matters
Philosophers who argue that we should care about inequality often hold some variation of a prioritarian view. For them, well-being matters more for those who are worse off, and we should prioritise improving their lives over the lives of others. Several others believe we should care about inequality because it is inherently bad that one person is worse off than another through no fault of her own – some add the requirement that both persons be equally deserving. Either way, few philosophers would argue that we should worsen the better off, or worsen the average, while keeping the worse off just as badly off, only to narrow the inequality gap. Hence, when it comes to economic inequality, we should prefer to make the poor better off by making everyone richer instead of making everyone, on average or in sum, poorer. Moreover, on most views it is reasonable to care more about inequality at the bottom and less about inequality at the top. We should prefer to reduce inequality by making the worse off richer instead of closing the gap between those who are already better off. I believe a closer inspection of how these egalitarian/prioritarian preferences translate into economic concerns can lead one to reject a few common assumptions.
It is often assumed that the liberal economic model, when compared with strong welfare models, is detrimental to economic equality. Reducing poverty, egalitarianism and wealth redistribution are, after all, among the chief principles of the welfare State. The widening of the gap between the top and the bottom is often cited as a concern in liberal States. I wish to argue that, of the various inequality statistics available, if we look at the ones that seem most relevant for egalitarian ethics, then strong welfare States fare worse than economically liberal States. For that, I will focus on a comparison between the US and European welfare States’ levels of inequality.
Google is said to have dropped the famous “Don’t be evil” slogan. Actually, it is the holding company Alphabet that merely wants employees to “do the right thing”. Regardless of what one thinks about the actual behaviour and ethics of Google, it seems that it got one thing right early on: a recognition that it was moving in a morally charged space.
Google is in many ways an algorithm company: it was founded on PageRank, a clever algorithm for finding relevant web pages; scaled up thanks to MapReduce; and uses algorithms for choosing adverts, driving cars and selecting shades of blue. These algorithms have large real-world effects, and the way they function and are used matters morally.
Can we make and use algorithms more ethically?
Brenda Kelly and Charles Foster
Female Genital Mutilation (‘FGM’) is a term covering various procedures involving partial or total removal of the external female genitalia or other injury to the female genital organs for non-medical reasons (WHO, 2012). It can be associated with immediate and long-term physical and psychological health problems. FGM is prevalent in Africa, the Middle East and South East Asia, as well as within diaspora communities from these countries.
The Government, keenly aware of the political capital in FGM, has come down hard. The Serious Crime Act 2015 makes it mandatory to report to the police cases of FGM in girls under the age of 18. While we have some issues with that requirement, it is at least concordant with the general law of child protection.
What is of more concern is the requirement, introduced by the cowardly device of a Ministerial Direction and after the most cursory consultation (in which the GMC and the RCOG hardly covered themselves in glory), by which healthcare professionals are, from October 2015, legally obliged to submit patient-identifiable information to the Department of Health (‘DOH’), through the Enhanced Dataset Collection (EDC), on every female patient with FGM who presents for whatever reason. The majority of these women will have undergone FGM in their country of origin prior to coming to the UK.
Consider the following case. Imagine you inherit a fortune from your parents. With that money, you buy a luxurious house and you pay to get a good education, which later allows you to find a job where you earn a decent salary. Many years later, you find out that your parents made their fortune through a very bad act—say, defrauding someone. You also find out that the scammed person and his family lived an underprivileged life from that moment on.
What do you think you would need to do to fulfill your moral obligations?