Personalised weapons of mass destruction: governments and strategic emerging technologies
Andrew Hessel, Marc Goodman and Steven Kotler sketch, in an article in The Atlantic, a not-too-distant future in which the combination of cheap bioengineering, synthetic biology and crowdsourced problem solving allows not just personalised medicine, but also personalised biowarfare. They dramatize it by showing how this could be used to attack the US president, but that is mostly for effect: this kind of technology could in principle be targeted at anyone or any group, as long as someone existed who had a reason to use it and the resources to pay for it. The Secret Service appears to be aware of the problem and does its best to wipe away traces of the President, but it is hard to imagine this being perfect, working for old DNA left behind years ago, or being applied to all potential targets. In fact, it looks like the US government is keen on collecting not just biometric data but DNA from foreign potentates. They might be friends right now, but who knows in ten years…
If personalised biowarfare done via the Internet is not enough to give post-Halloween nightmares, consider the US expansion of “kill lists” into a “disposition matrix”: a system of people to kill or capture, together with the available means for doing so. As the Washington Post noted, this is an institutionalization of the practice of secret, targeted killing with very limited (if any) legal oversight. It is easy to see how future personalised biowarfare could be slotted into such a system right next to drone strikes, no doubt ably defended by a White House spokesperson as constitutional and certainly within the presidential remit, if it ever came to light.
Continuing to another domain: the US regards cyberwarfare as a casus belli. Yet Obama appears to have ordered cyber-attacks against Iran’s nuclear program. Leaving aside the layers of “cyber-” hype, what is actually being discussed is remote, technologically empowered sabotage. It may or may not be usable in a widespread, society-disrupting fashion, but it can certainly be used against focused targets. It will also have collateral effects, not least that the exploit tools become available to the wider community of hackers, who can turn them to their own ends.
It would be easy to continue here with a standard rant about the failings of the US government to uphold various ethical or humanitarian principles, but it would be rather redundant – that can be found anywhere on the Internet. It is also obvious that many other governments are moving in similar directions: the US just happens to be the biggest, most advanced and most scrutinized.
I think a more interesting angle is how governments and other groups handle the security implications of new and disruptive technologies.
Do they get it?
One interesting criticism of Obama’s decision to promote digital sabotage against Iran is that it may have been based on a faulty understanding of the technology and its consequences. He does not appear to have regarded himself as handing Iran a potential casus belli, nor to have considered that spreading the technology of Stuxnet and Flame into the open would let it be used by enemies of the US (which, after all, has the most sensitive and expensive infrastructure to lose). He or his advisers likely did not see a big problem because they regarded their tool as an ordinary tool or action (however sneaky). Normal tools don’t run away and become part of the threat ecosystem. But software is copyable, and once something is out it stays out: you need to protect yourself against it forever.
The same goes for drone technology. Since the US has demonstrated drone technology so effectively, it is now being copied by everybody. Not to be outdone by the Occupy movement’s occucopter, Hezbollah has launched its own drone. Given these results, perhaps the demonstration of Boeing’s CHAMP drone, equipped to destroy electronics, is not such good news for the US. How long before a counterpart is in the hands of groups the US would not want to have it? After all, it is an excellent weapon against high-tech infrastructure and the societies dependent on it – just the thing to even the odds in a conflict against the US.
While military forces can be protected against drones or anti-electronics weapons, it is unlikely that this would be feasible for an entire civilian infrastructure. The situation is very similar to the Secret Service’s defence of the president against bioweapons: they have a single person to protect, so they can focus on him and have a reasonable chance of success. The same mechanisms, whatever they are, would be unlikely to protect an entire society. The same holds for computer security: it is certainly possible to protect the president’s computer, but the real threat is the mass of unsecured computers out there, running the backbone of society.
Slowing down the spread of disruptive technologies is hard, as many governments are discovering. One reason is that most of them have positive uses: the Internet is enormously empowering, drones allow us to monitor our environment better, biotechnology will help medicine and the environment, 3D printing will enable mass customization and garage innovation, and numerous toxic and explosive chemicals are essential parts of our industrial infrastructure. Making use of these technologies and co-opting them is often a better solution than trying to prevent the bad uses, since bad uses can rarely be predicted beforehand. As noted in The Atlantic article, some systems would help both the president and everybody else. Widespread monitoring for new pathogens, transparency and data-sharing to boost response capabilities, and constant pursuit of better biodefenses would make everybody safer. There are many more minds and far more resources out there interested in reducing risk than could ever be mustered by any government. We just do not know whether this is enough to counter the Moore’s law of mad science.
Recognizing there is a problem
The common point of biohacking, drones and cyberwarfare is that they are technologies that fundamentally change the nature of national and personal security. Yet they are not currently handled differently by governments: they are certainly seen as strategic technologies, but to decision-makers that merely implies that We should get them before They get them, not that it might be risky to pursue them at all. It may be impossible to prevent them from eventually being invented and used, but it can be rational for a government like the US one to ask itself whether it wants a world with these technologies now rather than later.
Maybe the decision-makers are on top of things and do make sensible decisions about which strategic technologies to introduce. But past evidence speaks against it. The Nazi German military did not want computers for code-breaking. The Soviet establishment regarded radar stealth technology as irrelevant and allowed Pyotr Ufimtsev to publish his findings in the open literature, where they were used by the US stealth program. The potential of the Internet for changing economics and politics seems to have passed most governments by until the 2000s. Decision-makers still do not seem to have understood the subversive potential of digital currencies.
If, as I think, decision-makers have a hard time grasping the full implications of the technologies they launch, then recognizing that there is a problem is the key issue. Yes, the technologies should also be used for good ends, but if you do not understand the consequences of your tool, your intentions have a good chance of being swamped by unforeseen consequences. Yes, predicting technology is notoriously hard, especially for open-ended technologies like computers and biotechnology, but that doesn’t mean they are utterly beyond reason and foresight.
At the very least, decision-makers should consider whether they ought to be pushing for technologies or practices that are likely to damage their own strategic interests. If you are more vulnerable than your competitors to biowarfare, EMP, cyberwarfare or assassination politics, it is irrational to promote them: you should attempt to slow their spread and development, and ideally cultivate technologies or institutions that reduce their impact.