
Regulating The Untapped Trove Of Brain Data

Written by Stephen Rainey and Christoph Bublitz

Increasing use of brain data, whether from research contexts, medical device use, or the growing consumer brain-tech sector, raises privacy concerns. Some already call for international regulation, especially as consumer neurotech is about to enter the market more widely. In this post, we wish to look at the regulation of brain data under the GDPR and suggest a modified understanding that would provide better protection for such data.

In medicine, the use of brain-reading devices is increasing, e.g. Brain-Computer Interfaces (BCIs) that afford communication or control of neural and motor prostheses. But there is also a range of non-medical devices in development, for applications ranging from gaming to the workplace.

Currently marketed devices, e.g. from Emotiv or Neurosky, are not yet widespread, which might be owing to a lack of apps, issues with ease of use, or perhaps just a lack of perceived need. However, various tech companies have announced their entrance into the field and have invested significant sums. Kernel, a three-year-old multi-million-dollar company based in Los Angeles, wants to ‘hack the human brain’. More recently, they have been joined by Facebook, which wants to develop a means of controlling devices directly with data derived from the brain (to be developed by their not-at-all-sinister-sounding ‘Building 8’ group). Meanwhile, Elon Musk’s ‘Neuralink’ is a venture which aims to ‘merge the brain with AI’ by means of a ‘wizard hat for the brain’. Whatever that means, it’s likely to be based on recording and stimulating the brain.

These developments highlight the intention to develop neural interfaces that link computers directly with the brain. Whether they will succeed remains to be seen, but BCIs are clearly going to be around for a while. With this kind of financial backing from tech companies, and the high profile offered by figures like Musk, they are unlikely to be a flash in the pan.

These devices do not detect thoughts directly but record and process brain activity. In doing so, they generate a lot of brain data. Of the large amount of data collected, only a subset may be of direct relevance for operating the devices they control; the remainder could be thought of as a kind of neural data exhaust. The status of this data raises interesting regulatory questions.
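To make the “data exhaust” idea concrete, here is a minimal, purely illustrative sketch (Python with NumPy only; the sampling rate, channel count, and feature choice are our own hypothetical assumptions, not any vendor’s actual pipeline). The point is simply that the control signal a BCI needs can be a tiny fraction of what it records.

```python
import numpy as np

# Hypothetical recording parameters for a consumer EEG headset.
FS = 256          # sampling rate (Hz)
N_CHANNELS = 32   # electrodes
SECONDS = 60      # one minute of use

rng = np.random.default_rng(0)
raw = rng.standard_normal((N_CHANNELS, FS * SECONDS))  # stand-in for raw EEG

def band_power(signal, fs, low, high):
    """Mean spectral power of one channel within [low, high) Hz."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs < high)
    return spectrum[mask].mean()

# The device's control loop might only need alpha-band power on two channels...
control_features = np.array([
    band_power(raw[ch], FS, 8.0, 12.0) for ch in (0, 1)
])

# ...while the full raw recording -- the "exhaust" -- is orders of magnitude larger.
print("bytes used for control:", control_features.nbytes)
print("bytes recorded in total:", raw.nbytes)
```

On this toy example, the retained raw recording is several orders of magnitude larger than what the device actually needs, and it is that surplus whose regulatory status is at issue.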

It seems quite clear that this trove of data is something that tech giants will be very interested in using. It represents a data asset for activities continuous with their existing interests, which include profiling human behavior through datafication.

We can easily imagine some dystopian idea, wherein a company – let’s call it Schmoogle – creates a game and measures how its players react to stimuli. This could supply direct measurement of brain activity relative to known stimuli as the players navigate their virtual worlds. In so doing, the players furnish a brain signal database.

A social media company too – let’s say Schmacebook – using the same kind of approach, might want to see how we react to every post much more directly than via thumbs up and down. Brain data recording, by wizard hat or other means, could give neurophysiological insight into our reactions to the posts we see from granny, CNN, or some political party.

The types of mental states a person entertains, such as specific preferences and emotional reactions to all sorts of stimuli, could be tested in this way. A specific brainwave, the so-called P300 wave, may reveal whether a stimulus is new or familiar to the person (its use for forensic purposes has been investigated for many years). Preferences or dispositions might be inferred without users having to do anything special. All that’s needed is that they are using their computer via the BCI. They may not even be aware of it. These reactions are also not under the conscious control of the person, so, in a sense, they tap into the unconscious. However, we hasten to note that fears about reading out thoughts or the content of mental states are premature.
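The P300 inference mentioned above rests on a simple signal-processing idea: average the EEG epochs time-locked to each stimulus and look for a positive deflection roughly 300 ms after onset. Below is a deliberately simplified, hypothetical sketch of that averaging step (NumPy only; the window, threshold, and simulated signal are illustrative assumptions, not a validated detector).

```python
import numpy as np

FS = 256  # Hz, hypothetical sampling rate

def mean_erp(eeg, stim_samples, pre=0.1, post=0.6):
    """Average single-channel EEG epochs time-locked to stimulus onsets."""
    n_pre, n_post = int(pre * FS), int(post * FS)
    epochs = [eeg[s - n_pre : s + n_post] for s in stim_samples
              if s - n_pre >= 0 and s + n_post <= eeg.size]
    return np.mean(epochs, axis=0), n_pre

def looks_like_p300(erp, n_pre, threshold=2.0):
    """Crude check: positive peak 250-500 ms after stimulus onset."""
    window = erp[n_pre + int(0.25 * FS) : n_pre + int(0.50 * FS)]
    return window.max() > threshold  # threshold is arbitrary here

# Toy usage: noise plus an injected "familiar stimulus" bump at ~300 ms.
rng = np.random.default_rng(1)
eeg = rng.standard_normal(FS * 30)
onsets = np.arange(FS, FS * 29, FS)        # one stimulus per second
for s in onsets:
    eeg[s + int(0.3 * FS)] += 5.0          # simulated P300-like deflection

erp, n_pre = mean_erp(eeg, onsets)
print("P300-like response detected:", looks_like_p300(erp, n_pre))
```

Nothing this crude would work as a forensic test, of course; the sketch only shows why stimulus-locked recordings, collected passively during ordinary use, could in principle support such inferences.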

Such data is of course of great interest to many, including the tech companies whose power is grounded in technologies and data; this kind of information would be of undeniable value to them. And there are many ways in which this might be problematic. Brain data might easily be instrumentalised in a variety of ways, all the more so in the context of increasing algorithmic sophistication and the normalization of Big Data.

What to do? The normative side

The guiding ethical and legal idea that stands in some contrast to the collection of brain data is privacy, here in terms of mental privacy. The notion has only just begun to receive attention from scholars, both in ethics and the law. A robust concept of mental privacy – and its limits – will be key for an apt regulation of brain data. Here, we wish to draw attention to an instrument familiar to very many in the EU, as it has transformed surfing the net into a continuous clicking on consent buttons: the General Data Protection Regulation (GDPR). How does the GDPR relate to the issues raised above in terms of the recording of brain data?

Among its provisions, the GDPR regards individuals as ‘data subjects’ and asserts each data subject’s rights over any and all personal data from which they might be identified. This is a large category. It means that companies that retain your contact details, for instance, must exercise care in sharing, storing, and deleting that data. Different types of data attract different levels of protection, depending on sensitivity. Health data sees higher levels of protection than some other types, as very sensitive information about the data subject may be derived from it. If brain data is gained through medical devices, e.g. a BCI used in motor or speech rehabilitation, it regularly qualifies as health data. However, if it stems from consumer neurotech and recordings of the type envisioned by Facebook, Kernel, and Neuralink, the data might seem – on the face of it – not to constitute health data.

We wish to suggest that brain data – all such physiological recordings – should be considered health data and treated accordingly, thereby affording it a high level of protection. The reason is that brain data is as sensitive as the other forms of data in the specially protected category. Moreover, it seems to us that measurements of body and mind were conceived by the drafters of the GDPR as instances of health data, a widely understood category; the drafters may simply not have had the expansion of non-medical recordings of this kind in mind.

As a consequence, the recording of such data, and all further uses, would require consent by the data subject (with the exception of ten special cases enumerated by the GDPR). This would provide a first layer of security.
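As a rough way of picturing what this reclassification would change in practice, here is a hypothetical sketch of a consent check in Python. The category names, the idea of representing special-category processing as an enum, and the boolean flags are our own illustrative simplifications, not the GDPR’s actual machinery; the point is only that moving brain data into the specially protected category makes explicit consent (or another enumerated exception) a precondition of processing.

```python
from enum import Enum, auto

class DataCategory(Enum):
    ORDINARY_PERSONAL = auto()   # e.g. contact details
    HEALTH = auto()              # Article 9 "special category"
    BIOMETRIC = auto()           # also special category
    BRAIN = auto()               # the category this post argues should count as health data

# Special-category processing is prohibited unless an Article 9(2) exception applies;
# explicit consent is the first of those exceptions.
SPECIAL_CATEGORIES = {DataCategory.HEALTH, DataCategory.BIOMETRIC, DataCategory.BRAIN}

def processing_allowed(category: DataCategory, explicit_consent: bool,
                       other_art9_exception: bool = False) -> bool:
    """Very rough decision rule for whether processing may proceed."""
    if category in SPECIAL_CATEGORIES:
        return explicit_consent or other_art9_exception
    return True  # ordinary personal data still needs an Art. 6 basis, omitted here

print(processing_allowed(DataCategory.BRAIN, explicit_consent=False))  # False
print(processing_allowed(DataCategory.BRAIN, explicit_consent=True))   # True
```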

But it might not be enough. As we all know from the way we deal with our smartphones, data subjects easily consent to having their data recorded and used, at least if they get something useful in return. We pay with data. Very likely, this will be the business model for brain data as well. And for this, the current GDPR does not offer satisfying solutions. Novel regulatory frameworks might be needed. However, they should be developed for all sorts of data, including but not limited to brain data. The increasing role of algorithmic processing should also be borne in mind: it is not easy to predict to what uses troves of data may be put (the data exhaust).

Data recorded for some purpose might result in a cache of collateral data that, once processed by future algorithms, may reveal information in unexpected ways about unexpected dimensions of the data subject. These potentialities, rather than an essentially special nature of brain data (neuroessentialism), ought to attract the highest protections.


Comments on this post

  1. This brings up some interesting legal points on ‘intellectual property’ (and perhaps even ‘identity’).

    Who owns this data?
    Who has the right to broadcast it?

    We might find some precedents in television stations broadcasting over state lines. Can the foreign state pick up that feed and re-broadcast it, thereby creating a real-time mimic?

    How would you tell the difference between the original and the mimic, if they are behaving identically at the same time?

  2. This raises interesting questions about the definition of health data and what other protection would exist under GDPR if brain data was not defined as such.

    GDPR defines biometric data as: “…physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person…” This arguably includes brain data whether processed for health reasons or not and is protected as ‘Special Category’ data in the same way that health data is.

    GDPR also requires the ‘informed and freely given consent’ of a data subject to permit any form of biometric data processing or any profiling of such data. Thus, provision under the GDPR exists to protect data subjects in the face of the growing brain-tech industry.

    The challenge will be whether big tech companies can guarantee ‘informed and freely given consent’ from data subjects when processing brain data. I’m not sure where the Schmoogles and Schmacebooks would begin with clear and easily understood explanations for collecting, profiling, sharing, storing and monetising brain data when they continue to fail the transparency test on relatively simple issues like marketing and cookie consent. Google’s recent €50 million fine from the CNIL is clear evidence of this.

    Ethical development and use of brain data (regardless of purpose) is not only reliant on GDPR but also on a change of attitude in big tech companies on how they treat our privacy. Unless this shift in culture materialises in a more convincing way than it has so far, I imagine more big fines will follow.
