Profiting from Misery: Is There Something Different About Healthcare Data?

By Dr Jeremy Gauntlett-Gilbert – student on the MSt Practical Ethics programme

The advent of Machine Learning and Artificial Intelligence has opened up new possibilities for health research. Specifically, these techniques could be let loose on ‘big data’, such as the collective data of healthcare organisations (including the NHS), and would likely reveal new insights about health and disease.

However, this requires the release of health data – in the UK largely held by the public-service NHS – to researchers. These researchers may include universities, but also private, for-profit companies. This can be controversial, even scandalous: in 2019 it was shown that Ascension (a large US healthcare provider) had secretly transferred over 50 million medical records to Google, with no attempt to remove names or identifiers. In the wake of such scandals, and of other big-tech misbehaviour such as the Cambridge Analytica case, members of the public are increasingly reluctant to see their health records transferred to profit-making businesses. British respondents to a YouGov poll were markedly happier to transfer their data to ‘academic or medical research institutions’ (50.3%) than to a ‘tech company for commercial purposes’ (3.5%). This pattern holds internationally, to varying degrees. There seems to be no blanket refusal of the use of health data for research, but there is serious scepticism about mixing health data with the profit motive.

Ethicists have tried to understand this reluctance, and to generate rules or solutions that might improve public trust. They have explored how mixing the NHS’s ‘public good’ goals with economic profit may dilute trust. It also seems clear that people worry about anonymised health data being re-identified. Authors have therefore generated lists of mitigating procedures, such as strong data governance, clear consenting rules, patient involvement, risk minimisation, and compensation. They are aware that strong rules are needed for such ‘sensitive data’ – though it is rarely explained how health data remain ‘sensitive’ after good anonymisation. The debate is often framed as being about public trust, and how to strengthen it in the face of the profit motive.

These are all legitimate concerns – but what if we are missing something? We could shift our focus to the nature of the data itself, rather than the rules for using it. What if there is something fundamentally different about healthcare data – something the public intuits, and something that might have normative significance? Are there some things that simply shouldn’t be bought and sold?

Let us take an imaginary walk through the entrance and waiting rooms of a District General Hospital. It is immediately obvious that no-one is there for a happy purpose. Nearly all will have needles inserted into their bodies. Some will be irradiated, some will have tubes pushed down their throats or into their colons. Some will be rendered utterly unconscious and wholly dependent on strangers.

Thus, health data are only ever produced in a context of vulnerability, fear, pain, dependency and mortality. Vaccinations and annual health checks may be minor counterexamples to this rule. When we reflect that health data are a narrative and distillation of these experiences, we may get our first intuition that they are different.

Let us compare health data to other commercially valuable data, such as (1) your online shopping record, or (2) your location. (We may object to ‘selling’ any of these, but the interest here is in the contrasts.) We can first imagine a well-lived life: it would likely generate a good deal of location and shopping data, but ideally it would generate zero health data. The ideal amount of health data for any human (again, perhaps barring vaccinations) is – none. Shopping and location data may record both happy and unhappy aspects of a life; health data record only the aspects that involve discomfort and vulnerability.

It is also clear that health data are created whether we like it or not (a possible exception is cosmetic surgery). If you ask a health professional not to make a note about your consultation, they will likely have to decline – there is no “do not consent” option. And although we ostensibly give consent to healthcare procedures, this is rarely a free choice between similar options: even when we consent, there is often little true choice (“the only way to know if it is bowel cancer is for you to have a colonoscopy”).

So, when we take these data – a distillation of some of the worst moments in our lives, produced whether we like it or not – is it OK to sell them to make a tech company richer? The concept of ‘commodification’ is helpful here: taking something that was previously never ‘on the market’ (such as medical notes) and turning it into something that can be bought and sold. This concept has been central to thinking about issues such as organ donation for profit. Possibly, the public are intuitively rebelling against the picture that health data are fundamentally similar to online shopping data, and can therefore be sold freely so long as there are reassurances about good data governance. Even the act of describing a person’s medical notes as ‘health data’ already implies commodification.

This may also shed light on the issue of anonymisation. Perhaps reassurances about anonymisation, and about the low risk of re-identification or data breach, will produce a rapid change of attitude, and the public will then be ‘just fine’ with people making millions from their medical records. Or is it possible that people think some things should be used for the public good, perhaps, but only rarely bought or sold – anonymised or not?

Imagine that a huge tech company running profitable dating sites hoped to gain the transcripts of couple-therapy sessions. The company was particularly interested in the most agonising sessions, where relationships were finally breaking down in the most painfully intimate way – these data would be especially helpful in training its algorithm. To my intuition, it feels deeply uncomfortable to imagine these data being used for profit – imagine a shareholder buying a yacht on the proceeds – even if anonymisation were guaranteed. It may be just such an intuition that is driving the ‘acceptability of data sharing’ statistics cited at the beginning.

These perspectives may help to explain a component of public reluctance. However, do they also have ethical significance? We have talented ethicists examining issues of trust and governance – we may also benefit from examining the qualitative differences between types of data, and the ethical significance of where and how health data are generated.

For example, we may need to explore the ethics of profiting from unchosen moments of vulnerability. The context of data generation may also be normatively important – including the fact that healthcare settings promise respect for the privacy and dignity of vulnerable and exposed people. Healthcare consultations take place in an enhanced context of privacy that is not present when driving to a local beauty spot (location data) or buying a pair of jeans (shopping data). It may be possible to extend the ‘couple therapy’ example above to find other analogies, and disanalogies, to money-making from people’s medical notes. Are there connections with other domains where the public, or ethicists, have resisted commodification? Such analysis could significantly clarify the core normative issues, as well as the issues of public perception.
