Many important discussions in practical ethics necessarily involve a degree of speculation about technology: the identification and analysis of ethical, social and legal issues is most usefully done in advance, to make sure that ethically-informed policy decisions do not lag behind technological development. Correspondingly, a move towards so-called ‘anticipatory ethics’ is often lauded as commendably vigilant, and to a certain extent this is justified. But, obviously, there are limits to how much ethicists – and even scientists, engineers and other innovators – can know about the actual characteristics of a freshly emerging or potential technology – precisely what mechanisms it will employ, what benefits it will confer and what risks it will pose, amongst other things. Quite simply, the less that is known about a technology, the more speculation is required.
In practical ethics discussions, we often find phrases such as ‘In the future there could be a technology that…’ or ‘We can imagine an extension of this technology so that…’, and ethical analysis is then carried out in relation to such prognoses. Sometimes these discussions are conducted with a slight discomfort at the extent to which features of the technological examples are imagined or extrapolated beyond current development – discomfort relating to the ability of ethicists to predict correctly the precise way technology will develop, and corresponding reservation about the value of any conclusions that emerge from discussion of, as yet, merely hypothetical innovation. A degree of hesitation in relation to very far-reaching speculation indeed seems justified.
But, at the same time, philosophers have been imagining and analyzing the implications of technologies for centuries. The ways in which human beings could modify themselves and the ways in which they interact with the world are of perennial conceptual and normative interest. From brains in vats, through teleportation devices, to computer chips in brains, fantastical technologies are part of the bread and butter of philosophical inquiry. So, when ethicists speculate about future technologies, are they in fact engaging in good old-fashioned thought experiments? The answer is: sometimes yes and sometimes no, and it is important to be clear about what we’re doing.
An example of technology-as-thought-experiment in the contemporary ethical literature can be found in Savulescu and Persson’s work on moral enhancement and freedom. Although Savulescu and Persson take seriously the general possibility that some biomedical technologies could be used to effect meaningful moral enhancement, their discussion of the God Machine – a bioquantum computer that, by modifying agents’ intentions, intervenes to prevent very harmful acts – is a strictly philosophical exercise. They do not intend to provide a serious argument for developing such technology, nor is their use of the example suggestive of any optimism that the hypothetical technology could ever become a reality. Rather, discussion of the God Machine serves as a thought experiment to structure conceptual analysis of freedom of action and of its value.
However, whilst it should be obvious that Savulescu and Persson don’t envisage the God Machine as a plausible emerging technology, there are other debates in practical ethics where it is less clear whether the future technology under discussion is supposed to serve merely as a device for illuminating conceptual points, or whether it is considered a technology about which recommendations can presently be made. For example, in some discussions about cognitive enhancement drugs, ethicists intend to draw conclusions about the policies that, say, schools, medical councils, or aviation authorities should adopt. However, some of these discussions are motivated by hypothetical scenarios, within which features of the technology are caricatured or speculated about. Something like the following is not untypical:
Cognitive Enhancer X: Imagine that there is a pill that is completely safe and effective at remediating fatigue. Should the surgeon/pilot/bus driver be required to take it? If so, should we subsequently expect more of him/her in terms of performance? Etc…
What conclusions can really be drawn from the sort of analysis that flows from considering the hypothetical cognitive enhancer? I suggest that interesting theoretical conclusions can indeed be drawn but, from this sort of speculative example alone, the scope for practical recommendations is limited.
Reflection on the example of the surgeon and Cognitive Enhancer X can tell us interesting things about, say, the scope of professional obligations and the moral requirements that attend having another agent in your care. However, such speculation alone does not tell us what the Royal College of Surgeons’ Good Surgical Practice should say today about cognitive enhancers. This is because we cannot make concrete recommendations until – and unless – enough is known about the envisaged technology. Even when an existing drug like modafinil is flagged up as a tentative example, recommendations cannot be made unless all the particularities of modafinil are attended to. Modafinil, for example, has side effects, and it does not affect every individual in the same way. Even in individuals for whom improvements are demonstrated, not every cognitive capacity will be enhanced, and trade-offs might occur. Wakefulness drugs necessarily have knock-on consequences for agents’ subsequent sleep needs and wellbeing. There are many other relevant features.
So, the conclusions of theoretical discussion about Cognitive Enhancer X (and its implications for responsibility) clearly cannot be translated, without significant qualification, to recommendations about modafinil in surgical practice today. Sufficient attention to scientific research on modafinil could allow ethicists to come to conclusions about this specific drug, although this moves us quite far from the abstract discussion of the relationship between capacities and obligations. Beyond this particular debate, anticipatory ethical analysis is sometimes conducted even in the absence of existing precursor technologies. This, of course, can be valuable, but the more speculative the technology the more the analysis should be located firmly in the theoretical rather than the practical.
This being said, it should be noted that the hazard of slipping between highly speculative or idealized discussion and practical recommendation does not run in only one direction. We should also resist the move of concluding, on the basis of technologies available today, that reflection on future technology is redundant or that the status quo will endure. Even if taking modafinil is not something we should require of surgeons today, this fact alone does not indicate that something more like Cognitive Enhancer X will never be developed.
I suggest that when speculating about technology, explicit consideration of the type of work one is trying to do will help maximize its usefulness. The central contention is very simple: abstract examples of idealized technologies can only generate theoretical knowledge, not practical recommendations. The more one has to speculate about the features of a technology, the more one’s inquiry should be restricted to the theoretical realm. The following provides a summary of my suggestions when speculating about new technology in ethics:
- Be clear about what the aim is (and reasonably can be): conceptual analysis and/or concrete recommendations.
- When intending to use future technology to examine conceptual claims, a clear, argument-derived methodology should be employed. Indeed, in such cases, attempting to guess at plausible features of a future technology might actually detract from the utility of the thought experiment. Thought experiments, unlike most technologies, need to be ‘pure’; the factors that are thought to be normatively relevant must be isolated and carefully varied to enable assessment of whether these factors indeed play the hypothesized role. Conceptual analysis thus requires a degree of idealization (or at least simplification).
- If making practical recommendations pertaining to an emerging technology, uncertainty and speculation relating to features of the technology must be acknowledged. Practical recommendations need to take into account all the complexity and particularities of the technology under discussion (to the extent possible). The more that is unknown, the more qualified any recommendations must be. Whilst it is good to engage in ethical analysis in advance, when very little is known about a future technology, efforts might be more usefully restricted to the conceptual/theoretical level.
- The mistake of slipping from theoretical analysis of hypothetical technologies to making practical recommendations should be avoided. Of course, both modes of analysis are important, but they require different methodologies; high-level conceptual analysis of technology will not render practical conclusions.
We obviously need to think ahead and not be overtaken by technology. The rate of development demands that sometimes we have to make best guesses. It can be helpful, however, to make sure we’re clear with ourselves and our audiences whether we are in fact able to make practical recommendations yet, or whether our analyses are, for the time being, more appropriately limited to the conceptual and theoretical.
References
Savulescu, J., & Persson, I. (2012). Moral enhancement, freedom and the God machine. The Monist, 95(3), 399.
I like this post.
That said, I might even go further. Folks talked productively about atom bombs in 1939, but by then, we knew about uranium fission. We basically just needed to do engineering, not discover new science. I think when we don’t grasp the science behind an imagined technology, we should not try to draw practical conclusions.