
Guest Post: ENHANCING WISDOM


Written by Darlei Dall’Agnol[1]

Stephen Hawking has recently made two very strong declarations:

  • Philosophy is dead;
  • Artificial intelligence could spell the end of the human race.

I wonder whether there is a close connection between the two. In fact, I believe that the second will come true only if the first does. But philosophy is not dead, and it can help us to prevent the catastrophic consequences of misusing science and technology. Thus, I will argue that it is through the enhancement of our wisdom that we can hope to prevent artificial intelligence (AI) from causing the end of mankind.

The first statement was made in the context of Professor Hawking’s argument that questions about the nature of the universe cannot be resolved without hard empirical data, such as that provided by the Large Hadron Collider, the largest and most powerful particle collider in operation. This may well be the case. He went on to explain: “Most of us don’t worry about these questions most of the time. But almost all of us must sometimes wonder: Why are we here? Where do we come from? Traditionally, these are questions for philosophy, but philosophy is dead.” He concluded: “Philosophers have not kept up with modern developments in science. Particularly physics.”[2] It is difficult to see exactly what Professor Hawking meant, but he seems to subscribe to the dominant naturalist view, which holds that scientific research and philosophical investigation are continuous theoretical activities, implying that philosophical problems could be solved by scientific means. This is a respectable view, and there is a grain of truth in it, but I believe it misrepresents the real nature of philosophy, which has to do with wisdom and not only with scientific knowledge. As Aristotle made clear in his Nicomachean Ethics, philosophy was born as love of sophia (wisdom), not of episteme (scientific knowledge) or techne (art). Just as it was a distortion in medieval times to make philosophy a slave of theology, it is a modern misrepresentation to reduce it to science, particularly to physics. Since wisdom is needed by any person in any society, philosophy is alive and well. This also explains why Professor Hawking may be wrong in his second prophetic statement: philosophy is capable of showing how we can, if we are truly wise, prevent artificial intelligence from causing the end of humanity.[3] That is to say, we need to enhance our wisdom, not only artificial intelligence.

It is crucial to recognize that philosophy has above all to do with wisdom, not with mere knowledge or intelligence. Most scientific knowledge is used to increase human wellbeing, but it can also be used to kill, to destroy the environment, and so on. Bioethics and other areas of practical ethics were born precisely as a response to the threat posed by a positivist understanding of the sciences and their technological misapplications. As Potter envisaged it, bioethics was meant to be a bridge between the natural sciences and the humanities. He argued for the need for “a new wisdom,” that is, “‘the knowledge of how to use knowledge’ for man’s survival and for improvement in the quality of life.”[4] Thus, although what wisdom is remains open to further discussion, it clearly has an irreducible practical component: it guides action so that we may live well. Many philosophers, including myself, subscribe to a hybrid approach to wisdom, defining it as “knowing-how to live well.”[5] Scientific knowledge is a necessary condition for wisdom so defined, but it is not a sufficient one. Further reflection on what constitutes a good life is also needed, and physics has almost nothing to say on this particular point. It is here that philosophy, the social sciences, ethics, the humanities and so on find their purpose: to discuss the ends of our political actions, the fair basis of our legislation, the uses and misuses of scientific knowledge, what constitutes our wellbeing, and so forth. If the point were to trade blame, we could say that physicists such as Hawking “have not kept up with modern developments in philosophy, particularly ethics.” That is to say, Hawking’s naturalism may provide answers to the fundamental questions he raises, but those answers rest on metaphysical assumptions not shared by people with different worldviews.
Thus, it is philosophy understood as love of wisdom that can remind us of the limits of scientific knowledge, opening the way to more pluralist approaches to the world and, in particular, to the good life. Consequently, despite Hawking’s apparent physicalism, we still need philosophical wisdom to guide us on all these issues, especially on how to use scientific knowledge to live better.

I cannot work out a complete account of wisdom here, but for my present purposes I would like to stress a significant difference between wisdom and intelligence, especially artificial intelligence. Again, what intelligence is remains open to discussion, but we must distinguish between a merely witty or smart person, who is very skillful at calculating means to achieve ends without wondering whether they are the right goals, and a wise person, who is well experienced in deliberating about what is constitutive of the good life. In order to aim at the proper ends, as Aristotle reminds us, moral virtues are essential, not only theoretical knowledge. Now, so-called “artificial intelligence” is mostly calculative reasoning based on algorithms: a very limited kind of intelligence indeed. I doubt whether what we call “social” or “emotional” intelligence can ever be encapsulated by algorithms, and I doubt whether there is an algorithm for the good life. Moreover, extensive knowledge is not synonymous with wisdom either. Learning facts, facts and more facts is sometimes irrelevant to deciding which course of action we must take; that depends on our aims, not on gigabytes of information. Thus, propositional knowledge is not sufficient for guiding our lives. In order to be wise, we need to learn how to live well. A little philosophical reflection is enough to realize that an exponential increase in the material conditions of life, at the cost of destroying the environment, does not lead to a better one. Bioethics, as a form of wisdom, reminded us that science and technology can only be a means to the good life. Learning about the good life requires experience and practice too. Wisdom, then, shows us that quantitative knowledge is not sufficient.

There is, of course, a discussion to be had about how to enhance our wisdom. Would traditional methods such as education be sufficient, or should we also make use of pharmaceuticals and other means? The debate around human enhancement (physical improvement, psychological development, moral perfection, and so on) is relatively recent, and much work remains to be done, especially from a critical point of view.[6] My modest contribution to this debate is to call attention to the need to enhance wisdom too. I do not believe that enhancing intelligence, especially artificial intelligence, without the wisdom of how to use it is really good for us. Thus, although I know of no drug to boost wisdom, and even doubt whether one is necessary, I do see clear signs that our moral concerns about AI are rightly in place, thanks to sensible persons such as Professor Hawking himself. That is to say, a new field in ethics, namely Machine Ethics, is already being developed to discuss the values that must guide autonomous robots boosted by AI. Thus, as I argued in a previous post (link: https://blog.practicalethics.ox.ac.uk/2015/06/guest-post-caring-robots/#more-11395), I think that respectful attitudes must guide our relationship with these autonomous robots once they are fabricated. The same must hold among artificial agents themselves: they should not be created to destroy persons (natural or artificial). It seems clear, then, that we have already started to find ways of placing moral constraints and limits on AI, showing that Professor Hawking’s prophecy may, we can hope, never come true.
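One simple way Machine Ethics proposals of this kind are sometimes pictured is as a veto filter in front of an agent’s planner: candidate actions that would harm a person, natural or artificial, are rejected before the agent ranks what remains. The sketch below is purely illustrative and hypothetical (the `Action` type and `ethical_filter` function are inventions for this post, not any real robotics API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    """A candidate action, flagged (hypothetically) for whether it harms a person."""
    name: str
    harms_person: bool = False

def ethical_filter(candidate_actions):
    """Veto any action that would harm a person (natural or artificial)
    before the agent gets to rank the remainder by usefulness."""
    return [a for a in candidate_actions if not a.harms_person]

plan = [Action("deliver medication"),
        Action("push bystander aside", harms_person=True)]
allowed = ethical_filter(plan)
print([a.name for a in allowed])  # → ['deliver medication']
```

Of course, the hard part in practice is deciding whether an action harms a person in the first place; a boolean flag merely relocates that judgement, which is precisely where wisdom, rather than calculation, is required.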


[1] Professor of Ethics at the Federal University of Santa Catarina. I would like to thank CAPES, a Brazilian federal agency, for supporting my research at the Oxford Uehiro Centre for Practical Ethics.

[2] http://www.telegraph.co.uk/technology/google/8520033/Stephen-Hawking-tells-Google-philosophy-is-dead.html

[3] http://www.bbc.co.uk/news/technology-30290540

[4] Potter, V. R. (1971). Bioethics: Bridge to the Future. New Jersey: Prentice-Hall.

[5] For further discussion on wisdom see: http://plato.stanford.edu/entries/wisdom/

[6] For a comprehensive book on these forms of enhancement see the book edited by Savulescu and Bostrom (2009): Human Enhancement (Oxford University Press).

 


6 Comments on this post

  1. Hi Darlei.

    Hawking has done some fine work in physics but has never displayed any knowledge or understanding of philosophy, not even in the philosophy of science, mathematics and logic. His disinterest and ignorance were obvious when he wrote in the closing passages of ‘A Brief History of Time’:

    ‘In the eighteenth century, philosophers considered the whole of human knowledge, including science, to be their field and discussed questions such as: Did the universe have a beginning? However, in the nineteenth and twentieth centuries, science became too technical and mathematical for the philosophers, or anyone else except a few specialists. Philosophers reduced the scope of their inquiries so much that Wittgenstein, the most famous philosopher of this century, said, “The sole remaining task for philosophy is the analysis of language.” What a comedown from the great tradition of philosophy from Aristotle to Kant!’

    The above was a remarkably silly thing to write, and he has continued in much the same vein for nearly 30 years. Hawking still does not understand philosophy, which is why he cannot understand AI. If you think science became too mathematical for nineteenth- and twentieth-century philosophers, you are not, for example, going to understand the logical problems that became central to philosophy and the work of Wittgenstein, and that are still with us in the philosophy of AI and cognitive science.

    Part of the problem is that we should perhaps make a distinction between “machine intelligence” and “artificial intelligence”. Without going into the detail and discussing the limits of the distinction, the former has been with us for millennia and is certainly set to become more complex and ubiquitous; the latter we are still pretty much in the dark about, because ‘we keep running up against the limits of our language’.

    Getting relatively simple machines to “appear” ever more intelligent is what AI has been doing for the last 60 years. One of the problems of programming machines with ethical rules, or designing them to acquire ethical principles, is that this may make them appear ever more intelligent when in reality they are not that smart. We usually, so to speak, build ethics into machines to protect the user and others (VW do the opposite of this), but when it comes to advanced autonomous machine intelligence this process could be used to make machines appear more intelligent and less controlling than they really are. I am not following Hawking down the AI-is-going-to-wipe-out-humanity route, but before we start declaring that the new field of Machine Ethics is going to save us from Hawking’s prophecy, can we perhaps think about what is being proposed and how this might be used by some, if not most, AI developers. (One of the differences between Machine Ethics and Computer Ethics is that, for the most part, the latter never lost sight of the fact that computing (inc. AI) was a big business with powerful friends.)

  2. Thanks, Mr. Tayler, for your feedback.
    I have two comments and a question. First, I do agree that S. Hawking seems to know little of philosophy. In particular, he reveals his ignorance when he states that, according to Wittgenstein, the sole task of philosophy is the analysis of language, and that this implies a comedown from the great tradition of philosophy from Aristotle to Kant. He mistakenly takes Wittgenstein to be a full-blooded linguistic philosopher. The remark you mention, “running up against the limits of our language”, can in my view only be interpreted in Kantian terms. This means that Wittgenstein is setting limits to what can be said (the propositions of the natural sciences) and keeping silent, in a non-quietist way, on moral, religious and artistic values. This is a true sign of wisdom: a prudent silence that scientificists fail to keep. That is also why S. Hawking is mistaken in his physicalism.
    Now, my second comment: I may have missed something important, but I do not see what significant difference you are drawing between machine intelligence and artificial intelligence.
    Could you, please, say more on what you have in mind before I answer your point on machine ethics?

    1. What you ask for took the best part of a PhD to explain, but I will give the briefest of outlines.

      Machine Intelligence (MI) can be analysed within Heidegger’s concepts of Dasein, Equipment (das Zeug), Ready-to-hand (Zuhanden), etc., and the works of Marx, the Frankfurt School, etc., but that is more than I can do here and is very unfashionable these days. Another way of approaching it is through Adam Smith and the opening passages of the Wealth of Nations, where he describes how a length of string could do the work of a boy pulling a lever on a steam engine, i.e., the division of labour (DoL) and automation. There is of course a direct link from this, through Gaspard de Prony and Herschel/Babbage, to modern computing. (Smith’s observation should not be confused with a similar definition of AI.) The DoL route is not necessarily an easy option for understanding MI, as it must include Smith’s and Marx’s (et al.) concerns about the DoL. With the advent of electronic digital computing it has become possible to produce MI that is capable of far exceeding the ‘intellectual’ work done by humans. That is not to say that MI is the ‘same’ as human intelligence, or that its development is directed towards an understanding and simulation of human intelligence. Information processing and storage are not claimed to be analogous to human thinking and memory. (MI could be described as ‘weak’ AI, but I have never been happy with Searle’s ‘weak/strong’ distinction.) As ever, today’s MI is capable of doing immense harm if it is controlled by a powerful élite and, as ever, we should seek to understand it within economic, political and social power structures.

      Artificial Intelligence (AI) can be approached in much the same way as MI. Instead of the string and the boy we have a thermostat and a heating system, except this time the thermostat is claimed to have ‘beliefs’ and/or to be ‘conscious’. (Because it is a ‘bit’ machine, I describe this as the ‘bit conscious’ theory.) This is not the same as describing machines as having intelligence, so long as we keep it within the limitations of the DoL. Roy Harris identifies how this theory can change our thinking:

      ‘Truth is not the only casualty once language is reduced to mental mechanisms. Knowledge is automatically devalued along with it. Thus once upon a time anyone would have been ridiculed who maintained that the clock in the square knows what time it is: on the grounds that, other than metaphorically, it makes no sense to attribute propositional knowledge to mechanical contrivances. But when nowadays a leading computer expert [John McCarthy] solemnly gives it as his view that ’the thermostat thinks the room is too warm in the same sense that a human might’, many will doubtless hesitate before dismissing that as nonsense. The hesitation is revealing. It is not that the humble thermostat nowadays basks in the reflected glory of the computer…What has changed is not our view of thermostats but our view of language.’ (Harris, Roy: ‘The Language Machine’, p.161.)

      (You might recall the Turing quote I gave in one of my posts to your ‘Caring Robots’ post)

      Without going into detail, the reason I make a distinction between MI and AI is that the latter has managed to change our understanding of language, thinking, knowledge, perception, etc. by claiming that it and cognitive science have, so to speak, got behind language, thinking, etc. and discovered mechanical contrivances that are, or can be, digitalised and replicated. Of course some AI systems have had limited success, but these systems could have been built without any of the accompanying flimflam.

      We have been able to produce systems that could be programmed with, or could acquire, ethical principles for many decades, but, as is now being proposed, systems that ‘appear’ to be ethical will very quickly fool people into believing that AI has in some way understood ethics and brought it into one of its control domains. (Perhaps the problem stems from AI being a “black-box” technology, which obviously makes it difficult to understand and does make it magical, in a bad way.) I have few concerns about designing MI systems that have been programmed to prevent unethical use, so long as that is understood to be a relatively simple programming task, or about designing heuristic systems to “learn” ethics, so long as the limits of the heuristics are understood. (Decades ago I used to get my students to make preliminary designs for an ‘ethical’ enhanced medical expert system.) The big problem is that humans quickly allow machines to do their ‘thinking’ for them, and we could get: “The computer has done this or says this is right, so it must be right.” Indeed, in our litigious world we can envisage situations where humans allow machines (e.g. medical ESs, autonomous vehicles) to do harm because they know that if they take back control to stop the harm, they might be sued if they failed and did cause harm. It is a knotty problem that has been with us in one form or another for centuries.

      My concerns about AI are undiminished because I do not like bad science and scientism. Too many scientists have unthinkingly been taken in by the AI hype and seem incapable of understanding the technical and philosophical problems AI is unable to resolve or simply ignores. I am reminded of eugenics and how it held sway over scientists, philosophers, authors, politicians, and the media. (Eugenics has of course been stripped of its history and relaunched as new, friendly ‘enhancement.’)

  3. Thanks, Mr. Tayler, for your remarks.
    As far as I can see, your distinction between machine intelligence and artificial intelligence does not affect what I said about Machine Ethics. My main point was that we must construct artificial agents embodying values and norms based on, for instance, reciprocal respect (both between natural and artificial agents and among artificial agents themselves). Now you mention Computer Ethics, and you are right to distinguish it from Machine Ethics (or even from Roboethics), but I think this is a specific issue related to particular professionals only. The central argument of my post is that wisdom is not intelligence only, and that it can protect us from the misuses of AI. Now I would like to add: even if AI has powerful friends. I might, of course, be wrong, or sound too optimistic and perhaps even naive. What I can say in my defence is that if artificial intelligence is allowed to construct self-replicating robots capable of destroying mankind, leading to a post-human world, it will be proof that homo sapiens, as we picture ourselves and our species, never really existed, or that we failed to keep a proper place for philosophy, that is, for wisdom, in our lives.

    1. Darlei

      Yes, as I said, so long as the ethics encoded in the machine does not give it the appearance of ‘being’ ethical and intelligent. That is by no means easy, because people quickly become “epistemically enslaved” by systems and let them do their thinking for them. I do not know of many AI researchers who see this as a problem; indeed, most of them seem to have adopted Dennett’s maxim: ‘The AI researchers’ answer is, Build it and see’. What we will see are AI researchers, scientists and media pundits pointing at an ‘ethical’ robot and claiming it is another breakthrough in AI’s quest to develop systems with super-intelligence. As Smith understood, the DoL (especially at this level) restricts human intelligence and ethical judgement and makes a person “as stupid and ignorant as it is possible for a human creature to become”. Before we start building and seeing, we should perhaps use our wisdom to consider what these systems are doing and will do to us.
