
Outsourcing Without Fear?

This is the second in a series of blogposts by the members of the Expanding Autonomy project, funded by the Arts and Humanities Research Council.

by Neil Levy

As Adam Carter emphasises in the first post in this series, offloading cognitive capacities comes at a cost: the more we depend on external scaffolding and supports to perform a certain task, the less we develop the internal capacities to perform that task. The phenomenon is familiar: people probably really are much less able to do mental arithmetic today than in the past, thanks to the introduction of the calculator. We tend to think of new technologies when we worry about what we lose as a consequence of scaffolding, but the concern is ancient. In the Phaedrus, Plato has Socrates approvingly recounting the story of an Egyptian king who worried that the invention of writing “will produce forgetfulness in the souls of those who have learned it, through lack of practice at using their memory.”

Few of us would seriously prefer to return to a preliterate society. Those who worry about the effect of technologies on cognitive capacities do better to focus on particular technologies, and their effects, rather than on technology as a whole.

Perhaps the thought experiment from Michael P. Lynch that Carter uses to frame his discussion can point us toward a way of distinguishing those capacities we should preserve from those we can outsource to technology. Lynch asks us to imagine what would happen were we to become even more heavily reliant on networked information systems than we already are, and those systems were suddenly to fail. We might use this thought experiment as a guide: we’d want those capacities we’d need were we to find ourselves in that situation.

What capacities are these? We’re imagining a world that is suddenly and catastrophically transformed: who knows what challenges we’d face? Rather than try to guess what specific capacities we’d need, we might do better by focusing on general-purpose skills: we might, that is, conclude that we ought to preserve those capacities that would prove useful in all or most possible situations.

This is a suggestion that is technology-friendly, in two ways. First, it allows us to outsource cognitive tasks freely, so long as doing so doesn’t lead to the atrophy of general-purpose capacities. I might be able to use my brain implant that automatically translates Sanskrit without concern; reading Sanskrit isn’t a general-purpose capacity.

Second, it’s technology-friendly insofar as it allows us to outsource to technologies even when those technologies perform tasks that are subserved by general-purpose capacities. Since these capacities are, by definition, general-purpose, the fact that they’re not being developed through one task is fully compatible with their development through another. Perhaps my brain implant for performing calculus is okay, because I rely on my brain alone to perform other mathematical operations and thereby keep my maths brain in shape.

If this is along the right lines, then we needn’t worry about cognitive scaffolding in many cases, because it will allow us to preserve our general-purpose capacities. But cognitive scaffolding that leads to the withering of some or all general-purpose capacities should be avoided.

There might, however, be a case for a more pervasive reliance on cognitive scaffolding, even when it led to the atrophy of general-purpose capacities. This case might proceed by first thinking more realistically about Lynch’s thought experiment. Imagine what would in fact happen were our current information networks (rather than the more extensive network Lynch imagines) to fail catastrophically and for a very extended period of time (the film Leave the World Behind imagines this sort of scenario). It would be a disaster: we are reliant on these systems pervasively, not just to perform arithmetic or translate languages, but for the food we eat, for our water and electricity supply, for petrol, for repairs, for our clothing, and all our other needs. Who would be able to survive such a catastrophe unharmed? Subsistence farmers and the tiny number of remaining hunter-gatherer peoples would be relatively well placed; the rest of us would be in deep, deep trouble. Our general-purpose capacities, be they ever so fit, aren’t enough to cope: we rely on networks to get by and flourish.

We might think that our general-purpose capacities would allow us to acquire the skills of hunter-gathering or subsistence farming. I think that’s wildly overoptimistic, and a central reason is that having general-purpose capacities isn’t enough: we need to rely on others and their knowledge to flourish. Subsistence farmers don’t work out how to farm for themselves: they rely on cognitive scaffolding provided by many other people, most of them long dead.

Humans have always relied deeply on one another. They’ve relied on ecological knowledge honed over many generations and many people to survive and flourish. The fate of well-equipped explorers in the nineteenth century illustrates what a difference this sort of knowledge makes. Think of the infamous Franklin expedition, which perished of starvation in an area the local Netsilik people regarded as rich in resources, or of the similar fate of the Burke and Wills expedition, members of which died because they lacked the highly specific knowledge needed to prepare the nardoo plant for consumption. The explorers might have had their general-purpose capacities honed to the highest degree, but without the cultural knowledge imparted by other people – and distributed across different individuals – they were doomed. Knowledge accumulated over many people and across time is essential for our survival, and it’s a fantasy to think we’d be better off depending on ourselves alone.

We should worry about our networks and our reliance on them. Plausibly, though, the moral shouldn’t be that we should rely less on these networks, but rather that we should build redundancy into them. Most of us would die were the networks to fail, whether our general-purpose capacities were in good shape or not.

In asking what capacities we should develop and retain, though, we might be asking one of two different questions. We might ask:

  • What capacities are in fact needed for flourishing as a human being?
  • What capacities is it normatively important for beings like us to develop?

We might be tempted to give different answers to these questions. Perhaps human beings are autonomous only when they retain certain capacities, even if these capacities are both redundant for flourishing today and wouldn’t help us to flourish were we thrown on our own resources. Lynch’s thought experiment brings us to focus on the first question. If the answer to the second question is different from the answer to the first, we will need a different kind of thought experiment to show it.


4 Comments on this post

  1. Hmmmm… Outsourcing without fear? That notion, from a standpoint of economics, seems naive to me. Those entities electing to outsource are seeking improvement in economic advantage, while expecting adequate performance from the *outsource*. Contracts and bilateral agreements may resemble Swiss cheese, here. There are so many examples of this as to render the question of fear moot, if not ludicrous.

    No, acquisition of economic advantage overrides moral or ethical concerns. Notice I do not mention words like obligation or responsibility. In a world of contextual reality (my term, I think), everything runs, counterpoint, to what we once thought, how we once lived. Outsourcing is about economic advantage only, seems to me. Who loses in the fray, down the line, does not matter. This is not nihilism. It is the level of reality, at which we live.

  2. Every human invention brings some externalities and risks.
    But the internet is distinctive and quite new in this respect.
    The internet has started to live a life of its own, and we (its inventors) are not able to control it or even shut it down.

  3. You are right not to waste too much time on Lynch’s thought experiment because it frames the apocalyptic event that could befall humankind when we, as the co-founder of Google Larry Page put it, “have [a brain] implant, where if you think about a fact it will just tell you the answer” (whatever that means) and this ubiquitous implant system undergoes a massive outage. Putting that aside, I’m still not sure why you want to be “technology-friendly” by restricting brain implants to just a few tasks.
    Even if we assume, ex hypothesi, that the technology functions as Page et al ‘believe’ it will and that it is possible, in accordance with Elon Musk’s stated aim, to merge humans with AI, these brain implants raise numerous issues in epistemology, the philosophies of mind, language, psychology, science/technology, logic/mathematics, and ethics.
    If we just touch upon the area you mention, mathematics, we immediately encounter problems if we use and rely upon machines that are “processing” mathematics for the mind.(1) I’m not sure how exactly your calculus implant could work (no one has given a satisfactory explanation of how the “mechanism” could work), but the notion that calculus could effectively be mechanised within the mind would have its ramifications across the whole of mathematics including calculus. Human intuition and ingenuity are essential in maths/logic and, as Turing showed, cannot be substituted or replicated by machines.(2) They are required to understand calculus as well as the rest of maths/logic, and this understanding provides the mind with invaluable experience of this type of understanding that can be used across the whole of maths/logic and, as you describe it, general-purpose capacities.
    There are, of course, mathematical operations that are better performed by machines and indeed can only be processed by machines in any reasonable time. But these operations, which can include some “proofs”, should be clearly identified and where possible reduced to the point where they are surveyable/checkable by humans.(3)
    My concern is that it makes very little difference whether either of us is correct in our assessment of calculus implants. It is the AI zealots like Page and Musk who want to go much further and believe that many if not most forms of “brain processing” could and should be remotely mechanised via brain implants. Machines, they claim, will be “smarter” (whatever that means) than humans, which includes being smarter in maths/logic than humans. They no doubt believe that this includes thinking and understanding maths/logic and indeed anything else, which is, as Turing observed, a meaningless belief that does not deserve discussion.(4)

    I am not sure why we need to be “technology-friendly”. There have been good, bad and indifferent technologies, and the numerous uses of information technology are no exception. ITs, like other technologies, are value- and power-laden. From the time when the likes of Bentham, Ferguson, Smith, Babbage, Airy, and Marx began to develop the concept of “machine intelligence” and discuss how it would become ubiquitous, it has for the most part been analysed within the context of who owns/controls it, under what form of ownership/control, and for what uses. I see no reason why we should depart from this approach and be particularly friendly to the present owners and governments that control and seek to exploit AI as a powerful surveillance and control technology. Indeed, I think we should be positively hostile to brain implants that are designed for, or could easily be used for, these functions, and as such could be used to restrict our freedom to use our general-purpose capacities.(5)
    I am certainly not technology-unfriendly, nor am I unfriendly to IT, but I am certainly unfriendly to Big Tech and the governments that seek to use IT/AI for surveillance and control purposes.

    1. I have made previous posts about Turing’s central claim that the belief that machines could think is too meaningless to deserve discussion. He nonetheless believed that humans would start to believe machines could think because language and educated opinion would be altered to the point where the belief was accepted as being meaningful. (See my post to Alberto Giubilini’s 17/4/23 blog)
    2. Advancing Gödel, Church, Post and his own work in ‘Computable Numbers’ (1936), Turing explored the use of intuition and ingenuity in maths and introduced his concept of the unrealisable (black box) ‘oracle machine’ in his ‘Systems of Logic Based on Ordinals’ (1939). Turing was right that this paper had not received the attention it deserved. Robin Gandy was also right that “it was a sinker to read”, mainly because it was in the notation of Church’s lambda calculus, which is difficult to understand but was well suited to Turing’s topic. Disputes around the formalisation and development of the calculus notation have a long history. As Florian Cajori points out, “[t]here was no attempt to restrict the exposition of theory and application of the calculus to ideographs. Quite the contrary. Symbols were not generally introduced until their need had become imperative.” (The History of Notations of the Calculus, 1923)
    Some would dismiss the above and/or claim that machines will be able to replicate intuition and ingenuity or, so to speak, get around them. As yet they have not provided any convincing argument and/or evidence.
    3. We need not go so far as Thomas Tymoczko by, inter alia, rejecting proofs that are not surveyable/checkable. However, every effort should be made, even if this can only be done by interactive theorem-prover machines/procedures (again, these often require human intuition). We need also to consider Wittgenstein when he described how the mechanisation of proofs would turn maths into an empirical, experimental science. (On this point, there appears to be almost a meeting of minds with Turing, as it would obviously alter the meaning of proof, mathematics, etc.) See Wittgenstein’s ‘Blue and Brown Books’ (1958) and ‘Remarks on the Foundations of Mathematics’ (1956).
    4. Again, we need not go so far as Turing by terminating the discussion; but if AI researchers/owners persist in anthropomorphising their machines in this way, we should at least make what they perceive to be irritating corrections. Indeed, today’s AI, which is non-sentient, unconscious information processing on machines, may need human consciousness, intelligence and understanding in the future as much if not more than it does now. Just as thousands of poorly paid humans now work as Mechanical Turk correctors, and millions more unpaid humans correct AI LLM systems as they use them, keeping them (some might say) within “acceptable” failure rates, so brain implants could be used by AI operators for the same thankless and extremely irritating purpose of getting humans to correct their systems.
    5. The technology of brain-computer interface, or BCI, which collects electrical activity from neurons and interprets those relatively simple signals into commands to control an external device, has obvious benefits. However, even here it is being financed to the tune of hundreds of millions of dollars per annum by the US Government (e.g., DARPA and the NIH’s Brain Initiative) and Big Tech. As is so often the case, research to help disabilities is being funded by organisations with ulterior motives.

  4. On an extremely fundamental plain, or is that plane, my brother and I discussed mankind’s greatest problem. That discussion dates back twenty years, or more. My contention was fear was the problem. Elder brother did not think so. Now, after many years, I have amended my thinking. He, his, I think.
    I *fear* contextual reality, which he has begun to comprehend. I think. To say I believe he understands contextual reality has the dog chasing its own tail, or the worm, ouroboros, devouring itself. Unfair to my brother and the dog. And, worm. Brother’s notion(s) have to do with, I think, conditioning and patterns. His degree was in Psychology, with a nearly completed double degree in Philosophy. Anyway, his ideas about and experience with conditioning and patterning are not so removed from what I assert of context. Simply put, my idea of contextual reality says we believe what we choose (um, free will). Moreover, we tend to believe what a peer group believes. There is strength in numbers, and most of us don’t want to be alone. There are countless peer groups; countless contextual realities. Here it is, then. Outsourcing, without fear, is irrelevant. It does not matter, where the final count is money.
