How Brain-to-Brain Interfaces Will Make Things Difficult for Us

Written by David Lyreskog

‘Hivemind Brain-Computer Interfaces’, as imagined by the AI art generator Midjourney


A growing number of technologies are currently being developed to improve and distribute thinking and decision-making. Rapid progress in brain-to-brain interfacing, and in hybrid and artificial intelligence, promises to transform how we think about collective and collaborative cognitive tasks. With implementations ranging from research to entertainment, and from therapeutics to military applications, these tools continue to improve, and we need to anticipate and monitor their impacts – not only how they may affect our society, but also how they may reshape our fundamental understanding of agency, responsibility, and other concepts which ground our moral landscapes.

In a new paper, I, together with Dr. Hazem Zohny, Prof. Julian Savulescu, and Prof. Ilina Singh, show how these new technologies may reshape fundamental components of widely accepted concepts pertaining to moral behaviour. The paper, titled ‘Merging Minds: The Conceptual and Ethical Impacts of Emerging Technologies for Collective Minds’, was just published in Neuroethics, and is freely available as an Open Access article.

In the paper, we argue that the received views on how we (should) ascribe responsibility to individuals and collectives map poorly onto networks of these ‘Collective Minds’. The intimately collective nature of direct multiple-brain interfaces, for instance, where human minds can collaborate on and complete complex tasks without necessarily being in the same room – or even on the same continent! – seems to suggest a collectivist moral framework for ascribing agency and responsibility. However, the technologies we are seeing in R&D do not require meeting the criteria we would normally turn to for ascribing such frameworks; they do not, for instance, seem to require that participants have shared goals, know what the goals of other participants are, or even know whether they are collaborating with another person or a computer.

In anticipating and assessing the ethical impacts of Collective Minds, we propose that we move beyond binary approaches to thinking about agency and responsibility (i.e. that they are either individual or collective), and that, for now, relevant frameworks focus on other aspects of significance to ethical analysis, such as (a) the technical specifications of the Collective Mind, (b) the domain in which the technology is deployed, and (c) the reversibility of its physical and mental impacts. In the future, however, we will arguably need to find other ways to assess agency constellations and responsibility distribution, lest we abandon these concepts completely in this domain.

4 Comments on this post

  1. I, along with more well-known people, have cautioned against over-zealous complexity. It is a Brave New World we inhabit. Futurists, past and present, are watching because they see notions and ideas becoming reality – or, at least, something beyond the possibilities they wrote about. Another blog I regularly review shared something about a petition concerning AI. There is a lot of such buzz happening now, along with concern over academic freedom. I really appreciate the diversity of interests, preferences and motives appearing here and among other posts. I am not a futurist in the time-honored tradition. However, I try to keep eyes, ears and mind open. There was a book, read years ago, that sparked my interest in complexity. If we are to remain at home in the Universe, we need to try harder, and so on. Unharnessed complexity is at once opportunity and peril.

  2. It is interesting but not surprising that discussions are taking place in this area.
    Not surprising because social group responsibility is a constant element of regulative regimes (which are frequently based on and referenced from ethical/moral standards). More about that below.
    Interesting because of the dangers involved in moving responsibility away from individuals (thereby removing individual moral agency) and placing it solely upon the group as a computing/thinking technological construct of whatever complexity. In those types of circumstance, ethical measures appear to take over, and logical incremental decisions can soon distort any originally implicated moral value out of all recognition. (When reading the article, the familiar dangers of groupthink – in the management sense – come to mind.) Yet more problematic would be shifting the emphasis for moral responsibility from the individual onto a constructed group itself: whilst seemingly providing an attractively constructive answer to the difficulty presented, this will no doubt lead to more manipulation of the ‘desires and intentions’ of individuals as a means of managing a particular group outcome, probably creating less individual ownership of, consideration of, or feeling of participation in any implicated ethical/moral dimension. Indeed, the argument that a technological construct of minds is distanced from any type of social-group term appears to nullify itself unless applied only to robotic appliances. Yet if the arguments are strictly focused upon only the ideas or decisions being processed, the paradoxical relationship becomes highly contrasted and distanced.
    Because of the logical basis of a great deal of ethical and technology discourse, there appears to be a tendency to attribute something other than logic to what are often only complex logical outcomes. This all too human trait makes me think this discussion as a whole is only attempting to address what has historically been seen as the movement of ethics/morality into a formal regulative regime (converting the informal into the formal), whilst at the same time denying the described complexity by largely ignoring it in the singular outcome of group responsibility. In that sense, formalising a movement of personal responsibility into social responsibility (in the sense that a group responsibility becomes the complete responsibility) would require many more links back to the individual level, if human morality were to survive. Clearly there will be drivers which deny the value of human morality and seek to replace it, in a similar way that, at certain stages in human intellectual development, no real value is perceived in the existence of the human race itself.

    1. Ian,

      The idea of group responsibility by regulation has so far, without doubt, been a failure.

      To see why, spend some time looking at how “companies” and “corporations” shun all morals and ethics, and use “collective responsibility” as a weapon against regulatory authorities.

      Thus we get massive harms that, if individuals could be blamed, would result in multiple lifetime sentences (we know this from the world’s largest Ponzi scheme so far). But as “no individual” can be held either to blame or blameless, no action against the individuals can happen. Thus a fine is the most punishment the “legal person” of the company gets. In most places this is a joke, because it is effectively “tax deductible” nearly everywhere, and it is only the slower investors that lose out.

  3. There is no disagreement on that here; you merely have to look at disasters like Bhopal and similar to see the tragic consequences of corporate failures due to these types of issue. Ponzi schemes begin to pale somewhat.
    However, the reason my response was murky in the way it was presented is that at times some social groups do structure themselves in the ways described in the article, for valid and useful purposes. The difficulties appear to arise where ethical oversight or management structures itself in the same or a similar way, believing that a singular methodology proven successful for a particular objective will be successful in all other circumstances.
    Disregarding the size, objectives and environment of a social group when considering structural strategy seems nearly always to develop, eventually, into a structural weakness, as many fail to consider how to retain the scope of the applied focus as they seek to develop. Yet creating a situation enabling people to recognise the limitations of a particular paradigmatic answer, without their becoming too defensive, is difficult in many successful or popular organisational contexts.
    For instance, a social group at societal level focused upon a rigid, narrow set of ethical/moral parameters will end up in a dysfunctional and weakened state, because it fails to meet the aspirations of much of its population, which leads it into defensiveness of the parameters rather than consideration of the population and its changing environment.
    Yet when a smaller social group comes, or is brought, together for a focused reason within defined objectives, and retains some form of external or broader-based internal oversight to maintain or alter the structure as required for the objective(s), it may function entirely ethically and correctly within the structure it creates, even where that could be ethically wrong for any wider society or particular morality. So, as we both stated, the link to individuals who ostensibly remain responsible does need to remain sufficiently strong to assure the retention of that external link as change occurs. The Green movement appears to have been brought partially to that type of realisation because of its focus, and utilises it in its save-the-planet campaigns by focusing on different (moral/ethical rather than regulative-regime) mechanisms, but many social media platforms have not yet fully recognised, or are still struggling with, those issues.
    There is no simple or straightforward answer when focusing solely upon any particular social group structure within the whole of the environment. Yes, collective responsibility is leveraged against regulatory authorities. That leveraging can be seen as correct from within some social groups, and as badly wrong from within others. To perceive that, look at the ongoing difficulties which sections of the media or political parties (amongst many other social groups) continue to trip over with privacy and social divides in most regimes, as the pace of change continues unabated. Yet one would think those particular social groups (or whatever term you would wish to apply) would remain acutely aware of wider environmental issues and changes, because of the roles they fulfil. (You will have to ignore the deliberate deception and manipulation attempting to measure, or create, particular outcomes when looking at those groups.)