
Experience and self-experimentation in ethics

The Guardian has an article about student use of cognition enhancers. It is pretty similar to many others, and I have already discussed my views on the academic use of cognition enhancers ad nauseam on this blog. However, it brings up something I have been thinking about since I was last in the media about enhancers. It started when I stated in an article in The Times that I had used modafinil; that strongly raised media interest, and I ended up in various radio interviews, The Daily Mail and the Oxford student newspaper (they of course asked the hardest questions). In the past I have always appeared as the expert on the function and ethics of enhancers, but now I was also a subject, and that really appeals to journalism. At the same time I started thinking about the ethics of ethicists using a substance they are studying the ethics of using.

What about self-experimentation in ethics?

Ethicists are unlikely to try things they think are wrong to do, but they might be willing to try things whose rightness they are uncertain about. This might make them occasionally act wrongly, but if the result of their investigation is a better understanding of what is right to do (and dissemination of this finding), then that wrong may be outweighed by the benefit (at least to others). The real problem would be if they did wrong acts that they did not learn from. The self-experimentation issue hence seems to come down to the value of experience and the risk of bias in philosophical work [*].

Self-experimentation has a long tradition in science, mainly in medicine and psychology. The experiments have ranged from infecting oneself with bacteria to trying psychoactive drugs to drinking heavy water. The results were sometimes deadly. In some cases these experiments were done to get the best possible observations, in other cases for ethical reasons (the researchers often had the best chance of giving informed consent), and sometimes probably out of sheer foolhardiness. There are also criticisms of the practice: certain experiments are too harmful for anybody to be subjected to them, the small number of participants may make the science weak, and the researchers may be overconfident in their objectivity. Some self-experimentation may not even be about answering a question but about trying something to "see what happens". A good example in this context is the neural interface of Professor Kevin Warwick, where one of the main points of the exercise (at least as I read it) was to demonstrate to the public and non-medical academics that this kind of technology exists and isn't just some kind of science fiction dream.

In the case of taking cognition enhancers there could be a conflict of interest for pharmacologists investigating them. By taking the drugs they make the personal estimate that the drugs are safe and efficacious enough. This may be a correct conclusion for a person to reach based on their evidence and life values, but it might bias the hoped-for scientific objectivity (the drug might not be safe or efficacious enough for everybody). This interest may be weaker than a commercial or career interest in the drug, but it is still potentially biasing.

Is the same true for the enhancer-using ethicists? They would be attempting to apply ethical theory to real-life situations (assuming they are doing practical ethics; meta-ethicists and normative ethicists might be more akin to chemists and general pharmacologists, respectively). The issue would be whether the enhancer use would bias how the theories were applied, i.e. make certain conclusions more likely regardless of the facts of the situation or the theory [**].

What is the role of the ethicist in the practice of applied ethics? One might view applied ethics as just applying existing philosophical methods, in which case there shouldn't be any individual judgement, just correct application. The problem with this view is that the choice of philosophical method may not be grounded in any fundamental theory, the application of a method often involves significant individual skill and idiosyncratic style, and the application is often far less well-defined than applying a mathematical method to a problem: different people do reach different conclusions even with the same method. All this suggests that the experience of the ethicist, and their influence on the conclusion, will be important.

Some ethicists are parents, and it is not strange that they write about the ethics of parenthood or child-related issues. Their experience of being parents gives them relevant information about the issues, and it also likely motivates them. This might also be biasing: perhaps the love parents feel for their children (and for being parents) will bias their judgement about various courses of action that a non-parent ethicist might be more neutral about. Yet few people would say that ethicist parents are unsuitable for writing about parenthood ethics, ethicists with war experiences unsuitable for writing about military ethics, or female ethicists unsuitable for writing feminist ethics. The biasing factors are outweighed by the value of having first-hand experience.

The reason for this is that the end product is (hopefully) a cogent argument for why a certain course of action is good, or why principle X will not be effective in situation Y. It does not matter how this argument came about, only whether it is valid, consistent, fits with shared experience and allows a deeper understanding of the issue. It is a bit like my earlier argument that it does not matter whether a theorem is proved under the influence of an enhancer or not: the value of the theorem is independent of how it came about. Certainly the characteristics of a person matter to whether they can prove the theorem or make a particular argument, but the argument once made has a life of its own.

(This is also why we accept arguments from philosophers not living particularly admirable lives or not following their stated moral tenets – the fact that Rousseau gave away his own children to a foundling hospital doesn't mean his ideas on education are not worth considering.)

This is why the biases of ethicists may be easier to bear than the biases of (say) pharmacologists. Ethical arguments are more transparent to examination than pharmacological claims: the text on the page is all there is to the argument, while behind a scientific paper there is a large amount of hidden experimental method, data, data processing and interpretation. Worse, the cost of scientific experiments makes it hard for a critic or would-be replicator to simply redo them as a check, while a philosophical argument costs far less to redo. The biases of the ethicist can be dealt with by writing a counter-paper showing a weakness in the argument or a better method, while funding biases in pharmacology can only be suspected and must be counteracted by running expensive, carefully controlled studies.

Maybe my own views on the potential benefits of cognition enhancing drugs are biased, in a feedback loop where I think I have good reasons for using them and this practice biases my views on the rightness of using them. But that would only make my arguments slightly less effective in advancing our ethical understanding (since I would not be making the most truth-seeking arguments). My arguments would still be subject to scrutiny and debate from people who disagree with my conclusions or reasoning. They can still be judged on whether they give a fair hearing to opposing arguments, show all the steps of reasoning and reach impartial conclusions.

My own experience with enhancers does not seem to predispose me towards extreme enthusiasm or disdain. I know that they do not make me a genius or fix my poor time management, yet I have seen how they make certain tricky problems easier, especially in stressful or noisy situations. I actually think I have more reasonable expectations of what enhancers can do than many people who have discussed their ethics or social impact. Similarly, being personally conscious of the risks of side-effects or other harms brings such issues into "near mode": in fact, this might be a constructive biasing effect, since much of ethics is done in "far mode" and near-mode analysis could uncover relevant connections that would otherwise be largely overlooked. But these views would still need to be examined by outsiders with different perspectives.

Footnotes

[*] Above I have ignored the issue of whether enhancers actually enhance ethical thinking (or pharmacology research, for that matter). I think they potentially could, but we have no strong evidence for it. They can certainly increase academic output, and maybe help us think about concepts or relations that are slightly more complex than we could normally process. From a research integrity standpoint this might improve intellectual competence and perhaps imagination and originality. But that remains to be seen.

[**] Perhaps the most worrisome bias is professional. It is in the interest of bioethicists to find bioethical problems. If cognitive enhancement were seen as no issue at all, we enhancement ethicists would have to look for other things to work on. If, on the other hand, it is seen as a major ethical, political and practical issue, our views will be in high demand and we will no doubt get to go to nice conferences or sit on well-paid quangos. This kind of funding bias distorts the overall shape of the literature regardless of individual bias, exaggerating both harms and benefits (the same problem of course applies to journalism).
