
Cross Post: Brainpower: Use it or Lose it?


This is the first in a series of blog posts by the members of the Expanding Autonomy project, funded by the Arts and Humanities Research Council.

Written By: J Adam Carter, COGITO, University of Glasgow

E-mail: adam.carter@glasgow.ac.uk

 

What are things going to be like in 100 years? Here’s one possible future, described in Michael P. Lynch’s The Internet of Us. He invites us to imagine:

smartphones are miniaturized and hooked directly into a person’s brain. With a single mental command, those who have this technology – let’s call it neuromedia – can access information on any subject ….

That sounds pretty good. Just think how quickly you could get the information you need, and how easy and intellectually streamlined the process would be. But here is the rest of the story:

Now imagine that an environmental disaster strikes our invented society after several generations have enjoyed the fruits of neuromedia. The electronic communication grid that allows neuromedia to function is destroyed. Suddenly no one can access the shared cloud of information by thought alone. . . . [F]or the inhabitants of this society, losing neuromedia is an immensely unsettling experience; it’s like a normally sighted person going blind. They have lost a way of accessing information on which they’ve come to rely.

This is worrying. But what is the right way for us to view the moral of this tale?

Perhaps it’s this: muscles (cognitive or otherwise) atrophy without exercise, so use them or lose them.

We don’t even need science fiction to support the observation that cognitive bioatrophy accompanies increased offloading and high-tech dependence. To take just one real-life example, a study of the navigational skills of London taxi drivers found that the hippocampus (a brain region responsible for spatial reasoning) shrank over time in drivers who relied entirely on GPS to navigate, compared with those who relied primarily on their biologically endowed spatial reasoning (i.e., mental maps) together with paper maps. As Roger McKinlay, former president of the Royal Institute of Navigation, puts it: “Mountain-rescue teams are tired of searching for people with drained smartphone batteries, no sense of direction and no paper map.”

It might seem that these cases (of sci-fi and real-world cognitive bioatrophy) represent an objectionable (even if not wholly blameworthy) kind of ‘learned helplessness’, where individuals adapt to convenient new technologies in a way that incentivises bad biocranial epistemic hygiene and, in turn, leaves them unable to rely on themselves in a jam. In short, we might be tempted to reason: (i) use it (your biocognitive skills) or lose it; (ii) you’d better not lose it (you don’t want to be helpless, after all!); so (iii) use it.

 

***

Comptometer: model ST Super Totalizer

But I think we should be careful before signing on to this kind of narrative.

Consider now a very different case of bioatrophy. In Ancient Rome, when culture was valued but books were scarce, the ability to memorise long poems – entire books of them – was a socially valuable skillset to cultivate, especially for graeculi (memory workers), who developed exceptional biological memory capacity in order to recite poems on demand for payment (or, in some cases, as servants).

There has been little use for the distinctive cognitive skills of the graeculi since the advent of the printing press, and now the Internet. There is simply no need anymore to memorise entire books, or to have the kind of developed biomemory capacity that doing so would require. There are cheaper ways of accessing and sharing that information.

In a similar vein, from the invention of the comptometer (a manual calculator) in the late 1880s until around the early 1970s, the ability to perform mathematical operations mediated by a comptometer – quickly and seamlessly – was a highly valuable skill, even a prerequisite for many office jobs. Each key of a comptometer adds or subtracts its value to the ‘accumulator’ the instant the button is pushed, so a comptometer user’s success at fast mathematical operations requires a synced-up, hybrid performance: certain fast calculations done in the user’s biological brain, combined with others done by the device. Now that we have digital calculators, frankly no one needs comptometers anymore (any more than we need the slide rule), and many in the present generation will never have heard of one. Whatever brainpower underwrote the skill of using a comptometer seamlessly at high speed will accordingly have atrophied over the past 50 years.
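To make the ‘instant accumulator’ point concrete, here is a minimal sketch in Python. It is purely illustrative – the class and method names are mine, and a real comptometer used column-wise keys and nines-complement subtraction rather than a signed running total:

```python
# Illustrative model of a comptometer's key-driven accumulator
# (hypothetical sketch; not a faithful simulation of the machine).

class Comptometer:
    def __init__(self):
        self.accumulator = 0  # running total, updated on every key press

    def press(self, value):
        # Each key press adds (or, with a negative value, subtracts) its
        # amount the instant the button is pushed. There is no separate
        # "equals" step, so overall speed depends on the operator keying
        # values as fast as they can read and decompose them in their head.
        self.accumulator += value
        return self.accumulator

c = Comptometer()
c.press(30)          # skilled operators entered numbers column by column:
c.press(7)           # 37 keyed as 30 + 7
print(c.press(45))   # adding 45 yields the live total: 82
```

The point the sketch brings out is that the division of labour is hybrid: the device keeps the running total, but the fast decomposition of each number into keyable parts happens in the operator’s head.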

The cases of cognitive bioatrophy that followed the obsolescence of the graeculi and of comptometer operators don’t look like they resulted in any net intellectual loss. It’s at least not obvious that – even from a purely epistemic point of view, where what we want is knowledge – we should be working to regain whatever distinctive biological brainpower was suited to those particular tasks, rather than apportioning our finite brainpower elsewhere.

If that’s right, then we can see how the moral of Lynch’s neuromedia vignette (along with modern-day GPS cognitive bioatrophy) is more complicated than it might first have appeared. In short, whatever we might find worrying about a society of neuromedia addicts who would become helpless in the event of a power failure, it can’t be explained by general ‘use it or lose it’-style reasoning.

 

***

 

Let’s consider now a very different vignette, effectively the inverse narrative of the initial neuromedia thought experiment:

BRAINDRAIN: Suppose that, in the future, a bioweapon wipes out not the electric grid but – via a highly contagious virus – human biomemory, such that memory traces (including newly formed ones) tend to dissolve or become corrupted after just a few days. Those who had relied primarily on biological memory have difficulty structuring their lives and locating old information when they need it. Those with impeccable digital calendars and diaries, and well-calibrated strategies for operating seamlessly with them, are much less affected; old and new information alike is stored safely in digital form and is easily retrievable.

What moral, if any, could we draw from the risks illustrated by BRAINDRAIN? Does appreciating this risk give us any reason, on balance, to avoid biomemory and just offload to our tech instead? Again, we need to be careful not to reason too quickly. We’ve already seen that that strategy could leave us equally helpless once (à la Lynch) the electric grid fails.

 

***

 

Let’s zoom out. Notice that, from a perspective on which we care about the kinds of knowledge we need to structure our lives, it’s not clear that cognitive bioatrophy resulting from tech use is – in and of itself – any more to be avoided than being an ineffective or infrequent cognitive scaffolder is (as per BRAINDRAIN). Relatedly, facts about cognitive bioatrophy don’t – in and of themselves – tell us much about whether, by avoiding such bioatrophy, we would thereby avoid scenarios in which we end up cognitively helpless, any more than our choice to use or forgo cognitive scaffolding would, in itself, tell us much about this.

These points motivate what I take to be a revisionist picture of the kind of epistemic autonomy that’s worth valuing, and of what risks to that autonomy look like. On this revisionist picture, the risks that, if realised, would leave us cognitively helpless (and so disempowered to gain knowledge) are multifarious in which aspects of our belief-forming mechanisms (and the scaffolding of those mechanisms) they target: they include risks to innate biocognition as well as risks to our capacities to offload and outsource in contexts where knowledge acquisition requires such epistemic dependence.

If we tacitly associate epistemic autonomy with the capacity to do well epistemically without epistemic dependence and the tech that will often enable it, we risk losing sight of what will be, on balance, our best ways to cognitively empower ourselves.

If this thinking is on the right track, then epistemic autonomy of the kind worth valuing will accordingly be less bioprejudiced than traditional theorising about autonomy (intellectual or otherwise) would suggest; it will involve navigating epistemic risk in a way that embraces (rather than opposes) epistemic dependence. Just how much epistemic dependence is too much is a fair question; but – if the foregoing is right – it’s best asked alongside the question of how much biocognition (at the expense of offloading) is too much. For the kind of epistemic autonomy that really matters to our getting knowledge and avoiding ignorance, both questions look relevant – perhaps even equally so.

 

Acknowledgement: The work was supported by the Arts and Humanities Research Council [AH/W005077/1].
