As the US and other nations gear up for war in Syria, the alleged use of chemical weapons by the Assad regime against civilians has received great, perhaps inordinate attention. A little over a year ago, US President Barack Obama called the use of chemical weapons a “red line”, though he was vague about what would happen if that line were crossed. And while there were previous allegations of chemical weapons attacks, the most recent accusations concerning an attack in a Damascus suburb that killed hundreds seem to have been taken more seriously and will likely be used as a casus belli for air strikes against Assad’s forces in Syria. Yet some have argued that this focus on chemical weapons use is rather inconsistent. Dominic Tierney at the Atlantic sarcastically comments, “Blowing your people up with high explosives is allowable, as is shooting them, or torturing them. But woe betide the Syrian regime if it even thinks about using chemical weapons!” And Paul Whitefield at the LA Times inquires, “Why is it worse for children to be killed by a chemical weapon than blown apart by an artillery shell?” These writers have a point. But, while it may not be entirely consistent, I will argue that the greater concern over the use of chemical weapons compared with conventional weapons is justified. Continue reading
New York City contemplates using aerial drones for surveillance purposes, while North Korea buys thousands of cameras to spy on its impoverished population. Britain has so many cameras they cease being newsworthy. The stories multiply – it is trivial to note we are moving towards a surveillance society.
In an earlier post, I suggested surrendering on surveillance might be the least bad option – of all likely civil liberty encroachments, this seemed the least damaging and the hardest to resist. But that’s an overly defensive way of phrasing it – if ubiquitous surveillance and lack of privacy are the trends of the future, we shouldn’t just begrudgingly accept them, but demand that society gets the most possible out of them. In this post, I’m not going to suggest how to achieve enlightened surveillance (a 360-degree surveillance would be a small start, for instance), but just outline some of the positive good we could get from it. We all know the negatives; but what good could come from corporations, governments and neighbours being able to peer continually into your bedroom (and efficiently process that data)? In the ideal case, how could we make it work for us? Continue reading
Andrew Hessel, Marc Goodman and Steven Kotler sketch in an article in The Atlantic a not-too-far future when the combination of cheap bioengineering, synthetic biology and crowdsourcing of problem solving allows not just personalised medicine, but also personalised biowarfare. They dramatize it by showing how this could be used to attack the US president, but that is mostly for effect: this kind of technology could in principle be targeted at anyone or any group, as long as there existed someone who had a reason to use it and the resources to pay for it. The Secret Service appears to be aware of the problem and does its best to wipe away traces of the President, but it is hard to imagine this being perfect, feasible for old DNA left behind years ago, or practised by all potential targets. In fact, it looks like the US government is keen on collecting not just biometric data, but DNA from foreign potentates. They might be friends right now, but who knows in ten years…
By Brian Earp
In this ‘hour’ of danger: Civil liberties and the eternal threat of terror
The U.S. government is legally justified in killing its own citizens overseas if they are involved in plotting terror attacks against America, Attorney General Eric Holder said Monday.
“In this hour of danger, we simply cannot afford to wait until deadly plans are carried out, and we will not,” he said in remarks prepared for a speech at Northwestern University’s law school in Chicago.
Pay attention to Mr. Holder’s choice of words here. This hour of danger? Excuse me: an “hour” is a bounded stretch of time – and not very long. But terrorism is a threat with no border – it has always existed, and will continue indefinitely. The “war on terror” cannot be won: you can kill a terrorist, sure, but you cannot eliminate a tactic. So let us not talk about an “hour.” This sort of speech is insidious. We all know that an hour takes sixty minutes and then it’s finished. But terrorism will present a “danger” forever.
Professor Paul Keim, who chairs the US National Science Advisory Board for Biosecurity, recently recommended the censoring of research that described the mutations which led to the transformation of the H5N1 bird-flu virus into a form that can be transmitted between humans through droplets in breath (in ferrets, the number of mutations required is frighteningly small – five). His reason is simple: the research would be a recipe book for bioterrorists.
Keim thinks, however, that such censorship will only delay the inevitable. The information will come out sooner or later, but at least governments might by then have developed and prepared sufficient stocks of vaccine and set in place other emergency measures to deal with a global pandemic.
This is not quite closing the stable door after the horse is bolted. It’s more like closing the farm gate, in the knowledge that eventually the horse will jump the gate and escape.
But this raises the question of why the stable door wasn’t bolted in the first place. In an article in Nature, the leader of one of the teams has said that the research was necessary to show that those experts who doubt the human transmissibility of H5N1 are wrong. But given that there is controversy here, governments should of course be doing what they have been doing: treating the possibility as a serious risk. In response to the charge that the research is dangerous, this same research leader’s response is that there is already a threat of mutation in nature. But threats don’t cancel one another, and nature is not revealing its secrets to bioterrorists. The researchers claim that their research was necessary for the development of a vaccine. Keim’s view is that this is quite implausible, since the drugs the scientists were using against their virus were the same ones used against others. If he’s right, a natural conclusion to draw is that the scientists should never have done the research in the first place. And, having done it, they should have kept quiet about its details and destroyed the virus. They might indeed have informed the media of their overall result, or some carefully restricted set of other researchers of the details of their research. But then of course they wouldn’t have been able to publish those details in top scientific journals.
It was probably hard for the US National Science Advisory Board for Biosecurity (NSABB) to avoid getting plenty of coal in its Christmas stockings this year, sent from various parties who felt NSABB were either stifling academic freedom or not doing enough to protect humanity. So much for good intentions.
The background is the potentially risky experiments on demonstrating the pandemic potential of bird flu: NSABB urged that the resulting papers not include “the methodological and other details that could enable replication of the experiments by those who would seek to do harm”. But it can merely advise, and is fairly rarely called upon to review potentially risky papers. Do we need something with more teeth, or will free and open research protect us better?
Scientists have made a new strain of bird flu that most likely could spread between humans, triggering a pandemic if it were released. A misguided project, or a good idea? How should we handle dual use research where merely knowing something can be risky, yet this information can be relevant for reducing other risks?
As always, we sentient beings on earth are at risk of being wiped out by some global catastrophe. Some of the risks – diseases or meteorites – are old; others – nuclear weapons or global warming – are more recent. They are discussed very well in Nick Bostrom and Milan Cirkovic’s edited collection Global Catastrophic Risks.
In one sense, a catastrophe is just major systematic change. It needn’t be bad. But of course many people believe that the ending of sentient life on this planet would be a catastrophe in the evaluative sense. It would be very bad for most of the sentient beings living at the time of the catastrophe, and bad in some more impersonal sense since it would prevent many potential sentient beings from becoming actual.
Clearly the ending of sentient life isn’t the worst outcome imaginable. That would involve the existence of sentient beings, in great agony. But the question remains whether this kind of catastrophe would be worse than its not happening, with things continuing much as they are.
It’s at least arguable that it would not be worse. Most would accept that it could be good for some individuals – perhaps those with only a short time of intense agony left. But they would also think that the overall suffering in the world is counterbalanced by the good things in the lives of sentient beings, considered as a whole.
This seems very plausible as a claim about the lives of some such beings. But some individuals have lives of an extremely low quality, consisting sometimes of nothing much more than great agony over a fairly protracted period. How are we to weigh the value of all these different lives against one another?
In his An Analysis of Knowledge and Valuation (1946), the American philosopher C.I. Lewis suggested that we might attempt such comparisons by imagining that we ourselves have to live the lives of all those concerned, in series. It might seem that such a comparison relies on some controversial theory of personal identity. But it need not. Imagine that by some means your own life could be extended hugely, and that you would then be plugged into some machine that would ‘play back’ into your consciousness all the experiences of all sentient beings until there were no more left.
Of course, many of these experiences would be wonderful. But many of them would be very bad indeed. It’s not clear to me that this stream of experiences would overall be better for me than no experiences at all, since the amount of suffering would be so great that perhaps no amount of good experience could counterbalance it. If this is the right view, then a global catastrophe might be something to be welcomed, at least from the impartial or moral point of view.
Should we encourage or avoid large scale environmental manipulation, for example in order to reduce climate change?
Measures such as carbon dioxide capture or ocean iron fertilisation have the potential to mitigate global warming, but what ethical issues are raised by these technologies? How should we take into account the potential risks of such measures, and how should they be weighed against the risks of inaction?
by Julian Savulescu
With his new paper, Craig Venter is creaking open the most profound door in humanity’s history, potentially peeking into its destiny. He is not merely copying life artificially, as Wilmut did, or modifying it radically by genetic engineering. He is moving towards the role of a god: creating artificial life that could never have existed naturally – creating life from the ground up using basic building blocks.
At the moment it is basic bacteria just capable of replicating. This is a step towards something much more controversial: creation of living beings with capacities and natures that could never have naturally evolved.
The potential is in the far future, but real and significant: dealing with pollution, new energy sources, new forms of communication. But the risks are also unparalleled. We need new standards of safety evaluation for this kind of radical research and protections from military or terrorist misuse and abuse. These could be used in the future to make the most powerful bioweapons imaginable. The challenge is to eat the apple without choking on the worm.
Other posts in PracticalEthicsNews on synthetic biology