
Mind wars: do we want the enhanced military?

Jonathan Moreno presented a special lecture on the 18th about “Mind Wars”, the military applications of neurotechnology. Here are some of my notes and comments inspired by this stimulating lecture.

Military bioethics

Moreno made an interesting initial point: the history of bioethics cannot be understood without understanding the history of military medical research. Many practices and principles, such as informed consent, were first developed in the context of US military research. One simple reason was that the military was one of the first truly large-scale collective research environments, at a time when research was still largely done by individuals. Another was the need for standardization to enable collaboration and consistent judgement across the military organization. As research organizations grew in the postwar era, the early bioethical approaches expanded with them.

That is not to say the military demonstrated fine ethical sensibilities. There are plentiful examples of experiments done on soldiers and civilians (informed or not) that clearly breach any reasonable morality, from hallucinogen dosing experiments and radiation exposure to questionable treatment programs for infectious diseases. A formal ethical code and careful box-ticking are no guarantee of sanity or actual ethical behavior.

Snake-oil

Military or intelligence goals are also no protection against snake oil. Many of the most amusing episodes in neurotechnology deal with attempts to harness psychic powers, often motivated by little more than fictional inspirations amplified by the Cold War fear that, even if success looked unlikely, it would be unacceptable to let the Other Side gain it first. Here the nontrivial interface between science and the military often seems to have caused trouble: a military or intelligence officer who considers something worth investigating may not have the scientific knowledge to judge its feasibility, but once the possibility of investigation is raised, scientists are likely to at least accept the money to do it, reinforcing the appearance that there might be something there (the mere existence of a program can trigger other programs).

Some researchers may also be motivated to keep the money flowing even if they know the stated goal is hopeless: did not the UK radar program spring from a foolish investigation into death rays? So the loop of motivated cognition, selective reporting of evidence, institutional inertia and the building of bureaucratic (and academic) fiefdoms gets going. Indeed, systems of highly questionable fitness for purpose, such as certain deception detection devices, are not just developed but deployed in the field, where practitioners defend them on the grounds that they at least have some effect: admitting that the placebo technology is just a placebo would undermine morale.

Big Neuroscience

But Moreno pointed out that something important may be changing. We are now entering an era of Big Neuroscience: the convergence of computing, biotechnology and neuroscience is producing plenty of exciting and potentially powerful technologies. Cognition-enhancing drugs, brain imaging and interfacing, genetic testing, big data applied to training psychology: all of these have significant potential. But perhaps most importantly, the flow may now run from the biomedical world into the national security world rather than the reverse. That does not resolve the snake-oil problem, nor the risk that treating national security as ethically supreme will lead to further unethical or wildly misguided programs. Quite the reverse.

There is an irony here in that the militaries of developed nations could potentially be the most utilitarian organizations imaginable: aiming at optimal, well-specified outcomes, with an internal structure well suited to consistent testing and observation, able to bring significant societal resources to bear, and in most cases actually having a fairly well-developed ethical view of what is and is not acceptable. But in practice real military organizations are far from this ideal, for sociological, cultural and political reasons.

Do we want an enhanced military?

It is not obvious whether we should even wish for such a scientific, rational army, let alone an enhanced one. If only just people had it, we might have a reason. But one cannot imagine an invention – or organizational structure – that only just people can use. More powerful neurotechnologies will enable military and intelligence forces to be better at what they do: deterrence, gathering information, harming opponents, acting as muscle for the national interest. Given that wars and democides dominate the list of past human-caused disasters, we may not wish these abilities to grow. Military supporters would argue that the number of wars between great powers has declined as their effective strength and coordination ability increased. However, the risk from rare tail events (accidents do happen) is not reassuring: a conflict between very powerful military forces could be an existential threat to our species.

The supporter might counter that neurotechnologies are a better category of military development than weapons of mass destruction: they aim at better (more agile, more accurate, more sustainable, more creative) coordination and execution rather than greater force projection and lethality. But to an innocent bystander a clash of smart armies is only preferable to a clash of dumb armies if that intelligence is directed by good values (such as avoiding unnecessary collateral damage). The neurotechnology to watch may hence be the ability to enhance moral decision-making, both among the rank-and-file and the leadership.

But mere enhancement is not enough: one can follow a twisted set of values brilliantly and conscientiously. For a morally enhanced military to be a good thing, its guiding aims must themselves be good. In the end the key is to keep states dominated by good ethical values and in control of their military and intelligence systems rather than the reverse. The history of neurosecurity shows that we have far to go in this direction – but it is urgent that we pursue it.
