Science and medicine have done a lot for the world. Diseases have been eradicated, rockets have been sent to the moon, and convincing, causal explanations have been given for a whole range of formerly inscrutable phenomena. Notwithstanding recent concerns about sloppy research, small sample sizes, and challenges in replicating major findings—concerns I share and which I have written about at length — I still believe that the scientific method is the best available tool for getting at empirical truth. Or to put it a slightly different way (if I may paraphrase Winston Churchill’s famous remark about democracy): it is perhaps the worst tool, except for all the rest.
Scientists are people too
In other words, science is flawed. And scientists are people too. While it is true that most scientists — at least the ones I know and work with — are hell-bent on getting things right, they are not therefore immune from human foibles. If they want to keep their jobs, at least, they must contend with a perverse “publish or perish” incentive structure that tends to reward flashy findings and high-volume “productivity” over painstaking, reliable research. On top of that, they have reputations to defend, egos to protect, and grants to pursue. They get tired. They get overwhelmed. They don’t always check their references, or even read what they cite. They have cognitive and emotional limitations, not to mention biases, like everyone else.
At the same time, as the psychologist Gary Marcus has recently put it, “it is facile to dismiss science itself. The most careful scientists, and the best science journalists, realize that all science is provisional. There will always be things that we haven’t figured out yet, and even some that we get wrong.” But science is not just about conclusions, he argues, which are occasionally (or even frequently) incorrect. Instead, “It’s about a methodology for investigation, which includes, at its core, a relentless drive towards questioning that which came before.” You can both “love science,” he concludes, “and question it.”
I agree with Marcus. In fact, I agree with him so much that I would like to go a step further: if you love science, you had better question it, and question it well, so it can live up to its potential.
And it is with that in mind that I bring up the subject of bullshit.
Written by Professor Julian Savulescu and Professor
This is a cross-post of an article which was originally published in The Conversation
Effective altruism is a philosophy and social movement which aims not only to increase charitable donations of time and money (and, more broadly, to encourage leading a lifestyle which does good in the world), but also to encourage the most effective use of these resources, usually by looking for measurable impacts such as lives saved per dollar.
For an effective altruist, the core question is: “Of all the possible ways to make a difference, how can I make the greatest difference?” It might be argued, for example, that charity work isn’t the best use of time; a talented financier may be better off working for a bank and using their earnings to pay for others to work for charities instead. Continue reading
Every day, for about thirty-five minutes, I sit cross-legged on a cushion with my eyes shut. I regulate my breath, titrating its speed against numbers in my head; I watch my breath surging and trickling in and out of my chest; I feel the air at the point of entry and exit; I export my mind to a point just beyond my nose and pour the breath into that point. When my mind wanders off, I tug it back.
The practice is systematic and arduous. In some ways it is complex: it involves 16 distinct stages. When I am tired, and the errant mind won’t come quietly back on track, I find it helpful to summarise the injunctions to myself as:
- I am here
- This is it
I alternate the emphases: ‘I am here’; ‘I am here’; ‘I am here’; ‘This is it’; ‘This is it’; ‘This is it.’
I note (although not usually, and not ideally, when I’m in the middle of the practice) that each of these propositions presumes something about the existence of an ‘I’. This is less obvious with the second proposition, but it is clearly there: ‘This’ is something that requires a subject. Continue reading
Scott Alexander has a thoughtful piece about who gets to set the default in disagreements about what is reasonable. He describes a couples therapy session in which one partner, bored with his sex life, goes kinky clubbing, to the anger of his strongly monogamous partner. Yet both want to stay together, at least for the sake of the kids. Assuming the answer is an either-or situation where one has to give up on their demand (likely not the ideal response in an actual couples therapy setting), the issue seems to boil down to who has the unreasonable demand.
It resonated with another article I came across in my news flow today: What It’s Like to Be Chemically Castrated. This article is an interview with a man who wanted to be chemically castrated in order to manage his sex addiction and save his 45-year marriage. Is this an unreasonable intervention?
Marriage is not well served by its defenders. The loudest and best reported of them are often fundamentalist bigots. It’s a shame, for marriage has a lot going for it.
Even if you think that marriage is an anachronistic/bourgeois/theologically contaminated institution, you’ll probably agree that the breakdown of marriages is best avoided. Of course incurably dysfunctional marriages should be ended, but most people aspire to enduring relationships, and the wrench of marital dislocation is emotionally and financially traumatic. If there are children, marriage breakup is painful for the parents and can be enduringly damaging for the children. There are, in short and quite uncontroversially, some significant harms associated with the breakdown of marriages.
How can marriage breakdown – and hence those harms – be avoided? Continue reading
Guest Post: “Gambling should be fun, not a problem”: why strategies of self-control may be paradoxical.
Written by Melanie Trouessin
University of Lyon
Faced with issues related to gambling and games of chance, the Responsible Gambling program aims to promote moderate behaviour on the part of the player. It encourages risk avoidance and offers self-limiting strategies, both temporal and financial, in order to counteract the player’s tendency to lose self-control. While this strategy rightly promotes individual autonomy, compared with other more paternalist measures, it also implies a particular position on the philosophical question of what is normal and what is pathological: a position of continuum. If we can subscribe to some measures of self-constraint in order to return to responsible – that is, moderate and controlled – gambling, this implies that there is no huge gulf or qualitative difference between normal gaming and pathological gambling. Continue reading
The Oxford Martin School recently held a two-day symposium on virtual reality and immersive technologies. The aim was to examine a range of technologies, from online games to telepresence via a robot avatar, to consider the ways in which such technologies might affect our personal lives and our interactions with others.
These sorts of technologies reignite traditional philosophical debates concerning the value of different experiences – could a virtual trip to Rome ever be as valuable (objectively or subjectively) as a real trip to Rome? – and conceptual questions about whether certain virtual activities, say, ‘having a party’ or ‘attending a concert’, can ever really be the activity that the virtual environment is designed to simulate. The prospect of robotic telepresence presents particular ethical challenges pertaining to moral responsibility for action at a distance and ethical norms governing virtual acts.
In what follows, I introduce and discuss the concern that virtual experiences and activities are to some extent deficient in value, especially where this relates to the formation and maintenance of close personal relationships. Continue reading
There is a long overdue crisis of confidence in the biological and medical sciences. It would be nice – though perhaps rather ambitious – to think that it could transmute into a culture of humility.
A recent comment in Nature observes that: ‘An unpublished 2015 survey by the American Society for Cell Biology found that more than two-thirds of respondents had on at least one occasion been unable to reproduce published results. Biomedical researchers from drug companies have reported that one-quarter or fewer of high-profile papers are reproducible.’
Reproducibility of results is one of the girders underpinning conventional science. The Nature article acknowledges this: it is accompanied by a cartoon showing the crumbling edifice of ‘Robust Science.’
As the unwarranted confidence of scientists teeters and falls, what will – and what should – happen to bioethics?
Written by Professor Neil Levy
The recent discovery of what is claimed to be a distinct species of the genus Homo, our genus, raises to three the number of species that may have co-existed with Homo sapiens. Homo naledi is yet to be dated, but it may be only tens of thousands of years old; if so, it coexisted with modern humans. Homo floresiensis, the so-called ‘hobbit’, seems to have been extant well after sapiens evolved, and there is strong evidence that the Neanderthals coexisted with, probably interbred with, and may have been killed by, our ancestors.
If any of these species had survived into contemporary times, we would face a novel ethical question: how to negotiate our stance toward a species that is not quite human, but too close to be regarded as simply animal (using that word in its common meaning, to refer to non-human animals). More specifically, we would face the problem of how to respond to another deeply cultural being. Naledi seems to have had a culture – so the researchers conclude from the placement of the bones, which they think indicates burial. Perhaps it used language (floresiensis seems a very good candidate for language use). Yet they might not have been the intellectual equals of modern humans (perhaps they were – genetic difference certainly doesn’t entail inferiority – but for the purposes of this post I will assume they weren’t). If they were our contemporaries, would we be obliged to allow them to vote? To have affirmative action for them in universities and in jobs (assuming that some of them, perhaps rare geniuses, could function at a high enough level to take advantage of these opportunities)? Should we treat them as permanent children, appointing guardians for them?
Some philosophers would say that the answer to these questions is quite easy: we should give them equal consideration. Equality of consideration is the kind of equality which philosophers like Peter Singer argue should be extended to chickens and chimps, just as much as human beings. Treating chickens equally in that sense doesn’t entail affirmative action or voting rights for chickens, because chickens don’t have an interest in either. It just requires taking their interests equally into account.
While there are strong reasons for thinking we ought to extend equality of consideration to Homo naledi, floresiensis and Neanderthals, that doesn’t tell us the answer to the concrete questions. Insofar as they are self-aware, these people (let’s call them that) have an interest in self-government, and therefore in voting. But (let’s assume) they have a limited capacity to understand the issues on which we vote. As self-aware beings, they might be harmed by being treated as inferior. But there may be good grounds for thinking that they are inferior.
We might offer them limited rights: rights to vote in elections for people who have the special role of looking after their interests. That would entail that they are not as self-governed as we are, since they would be living in a broader society (or in a world, at any rate) in which decisions are taken over which they have less say than we do.
I don’t think there are good answers to these questions. That is, while I am sure there are better and worse answers, I think this would be a true moral dilemma: the best possible response would have big moral costs. There seems to be no way to act that would avoid some harms to a properly cultural being that couldn’t be fully autonomous: harms that would arise from its awareness that it was less autonomous and less able to govern its own life than others.
Julian Baggini sees in the discovery of naledi good news for humanity; it shows that in some sense we are not alone. Perhaps, but had they survived, we would face a tragic dilemma. To that extent, we are lucky that they didn’t. Genetic diversity among modern human beings is tiny, with genetic differences between groups swamped by those within them. That ensures that the questions we face about how to treat members of other groups are in one central way easier: they are in every important respect our equals. Our ethics would struggle to settle how to treat a deeply cultural group distinct from us which is in some respects not our equals.
Written by William Isdale
University of Queensland
This year is the 70th anniversary of the atomic bombing of Hiroshima and Nagasaki. Are there any moral lessons we can learn from that historical episode? I think so.
Recently I delivered a talk on the radio about this topic. I argued that one key reason to study history is to learn lessons about human nature. The war in the Pacific against Japan can teach us about (1) our tribal natures, (2) the limits of empathy when we kill from a distance, and (3) the ratchet-up effect of retaliatory violence.
We have a moral obligation to take heed of those lessons, for instance by reining in our more dangerous traits. The existence of nuclear weapons, because of their destructive power, makes the imperative to understand and control our natures all the more significant.
Below is a slightly adapted version of what I said.
This year marks 70 years since the end of World War Two – a conflict that ended with the use of the most destructive weapons ever invented: the atomic bombs dropped on Hiroshima and Nagasaki.
Has it ever occurred to you to ask, just what is the point of commemorating wars? Do we commemorate them because they are interesting, or are there more important reasons?
If you’ve ever attended a war commemoration ceremony, you’ve probably heard speakers talking about the gratitude that we owe to those who fought to defend our way of life. Or speeches that urge us to reflect on the tragedy of lives lost, and the risks of rushing into conflict. And those are good reasons for remembering wars. But, in my view, they’re not the most important ones.
The Scottish philosopher David Hume once wrote that the principal reason to study history is to discover “the constant and universal principles of human nature”. And in no other area of human life is learning those lessons more important, than when they concern war.
By studying wars we can learn lessons about ourselves: about how we get into them, why we keep fighting them, and what we do to justify the extraordinary levels of cruelty and destruction visited on others.
Today I want to uncover three lessons about human nature that are revealed to us by the war in the Pacific against Japan – and particularly, from the nuclear bombing of Hiroshima and Nagasaki.