
It is not about AI, it is about humans

Written by Alberto Giubilini

We might be forgiven for asking so frequently these days whether we should trust artificial intelligence. Too much has been written about the promises and perils of ChatGPT to escape the question. Upon reading both enthusiastic and concerned accounts of it, there seems to be very little the software cannot do. It can provide or fabricate a huge amount of information in the blink of an eye, reinterpret it and organize it into essays that seem written by humans, produce different forms of art (from figurative art to music, poetry, and so on) virtually indistinguishable from human-made art, and so much more.

It seems fair to ask how we can trust AI not to fabricate evidence, plagiarize, defame, serve anti-democratic political ends, violate privacy, and so on.

One possible answer is that we cannot. This could be true in two senses.

In a first sense, we cannot trust AI because it is not reliable. It gets things wrong too often, there is no way to figure out if it is wrong without ourselves doing the kind of research that the software was supposed to do for us, and it could be used in unethical ways. On this view, the right attitude towards AI is one of cautious distrust. What it does might well be impressive, but it is not epistemically or ethically reliable.

In a second sense, we cannot trust AI for the same reason why we cannot distrust it, either. Quite simply, trust (and distrust) is not the kind of attitude we can have towards tools. Unlike humans, tools are just means to our ends. They can be more or less reliable, but not more or less trustworthy. In order to trust, we need to have certain dispositions – or ‘reactive attitudes’, to use some philosophical jargon – that can only be appropriately directed at humans. According to Richard Holton’s account of ‘trust’, for instance, trust requires the readiness to feel betrayed by the individual you trust[1]. Or perhaps we can talk, less emphatically, of readiness to feel let down.

 

Trusting humans and relying on tools

Readiness to feel betrayed or let down seems to presuppose attribution of responsibility for action to the individual we trust. This requires considering them, in some important sense, autonomous and conscious agents. We can only feel betrayed or let down by humans because only humans have the kind of autonomy and the level of consciousness necessary for attribution of responsibility. Or at least this is what most of us would assume. We cannot feel betrayed or let down by tools because we cannot hold tools responsible for failures (I ask those who think that they trust their car to hold their fire for now).

Consider the following example, taken again from Richard Holton. We are rock climbing and I can decide whether to use a rope or to take your hand to get on top of the rock. A rope is just a kind of technology, as is AI – at a different level of sophistication, but, for the purpose of rock climbing, with all the sophistication that is needed. My attitude in the two cases is different. Even if I have reasons to think you and the rope are equally reliable, I have additional reasons to take your hand compared to the reasons to grab the rope. My reliance on you is accompanied by trust, whereas my reliance on the rope is just that: mere reliance. I would not feel let down or betrayed by a rope. But I would feel let down or betrayed by you if you give me your hand and then fail to make the effort to lift me up.

 

Attributing human features to AI

So whether we can trust AI turns on whether AI is just a tool or something more human-like. And this gets to the core of our relationship with AI. Many keep raising the possibility that AI can be really creative, or conscious, in ways that resemble or indeed replicate the same features in humans. If AI could really be creative and conscious, then the kind of autonomy we attribute to artificial autonomous agents would also closely resemble the autonomy we attribute to humans. But these are vaguely framed possibilities. We would first need to agree on what it means to be autonomous, or creative, or to have consciousness in order to establish whether AI possesses these features. We will never agree on that.

Lack of shared definitions is not a huge problem for most practical purposes in our everyday life. We think of these properties as eminently human and we can confidently say that humans are autonomous, creative, conscious, without having to think much about definitions. At most, we can have doubts about humans in specific circumstances (for example, in the case of some severe disabilities) and at certain developmental stages. That is where definitions matter and disagreement might arise. But these are exceptions.

With AI, we would need definitions in order to decide whether these features can be attributed to AI or to our relationships with AI. We would need to think carefully about what eminently human features like autonomy, creativity, and consciousness actually are. Thus, when we ask whether machines are autonomous, responsible, creative, or conscious, we are really asking questions about human features.

The same goes for trust. We have no doubt that we can trust – or distrust – other humans. We normally trust other humans without having a definition of trust. Quite simply, trust naturally happens between humans. True, sometimes we talk of trusting our car or trusting our dog (or, more controversially, our cat). These are mostly ways of anthropomorphizing certain objects or animals. We tend to see pets as our friends. Perhaps more bizarrely, we have a tendency to anthropomorphize our cars, tools that we rely on heavily to carry out basic daily activities. But trust in pets and cars exists precisely to the extent that we anthropomorphize them. We would not say that we trust a wild animal or the bus we are taking, because it is more difficult to see them as human-like.

 

What does trust have to do with AI?

So, unsurprisingly, whether we can ‘trust’ AI depends on how we define trust. That is probably the best type of answer a philosopher can offer. But the implications of the answer are more meaningful than it might initially appear.

Any definition of trust that would allow us to say that we can trust (or distrust) AI needs to be consistent with the way we use ‘trust’ to refer to attitudes towards humans. Otherwise, we would not be applying trust to our relationship with AI. We would simply be changing the meaning of a term and forcing language upon people’s everyday communicative exchanges.

A definition of trust needs to take into account the features of both the trustor and the trustee. We usually think of both trustors and trustees as humans (or as anthropomorphized objects or animals). Now AI seems to challenge the idea that either the trustor or the trustee needs to be human for a relationship of trust to occur.

How can we ‘trust’ an artificial agent in the same sense as we trust a human? Even more problematically, can AI itself be a trustor? When it comes to artificial agents, trust can be thought of either as a relation between a human and an artificial agent (should I trust ChatGPT?) or as a relation between two artificial agents in an integrated system. For example, Mariarosaria Taddeo characterizes e-trust, understood as the trust between two artificial agents (AAs) that need to collaborate with each other in an AI-integrated system, as follows:

“the AAs calculate the ratio of successful actions to total number of actions performed by the potential trustee to achieve a similar goal. Once determined, this value is compared with a threshold value. Only those AAs whose performances have a value above the threshold are considered trustworthy, and so trusted by the other AAs of the system”[2]
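
To make the quoted rule concrete, here is a minimal sketch in Python of the ratio-and-threshold computation it describes. All names (success_ratio, is_trusted, TRUST_THRESHOLD) and the threshold value are hypothetical and introduced only for illustration; this is not Taddeo’s formal model of e-trust, just the kind of arithmetic the quote refers to.

    # Illustrative sketch only: names and threshold value are hypothetical,
    # not part of Taddeo's formal model of e-trust.
    TRUST_THRESHOLD = 0.9  # assumed performance threshold for the system

    def success_ratio(successful_actions: int, total_actions: int) -> float:
        """Ratio of successful actions to total actions for a similar goal."""
        if total_actions == 0:
            return 0.0  # no track record, hence no basis for 'trust'
        return successful_actions / total_actions

    def is_trusted(successful_actions: int, total_actions: int) -> bool:
        """An AA counts as trustworthy only if its ratio exceeds the threshold."""
        return success_ratio(successful_actions, total_actions) > TRUST_THRESHOLD

    # Example: an agent that succeeded in 47 of 50 similar tasks (0.94 > 0.9)
    print(is_trusted(47, 50))  # True

On this picture, ‘trusting’ another artificial agent amounts to comparing a running performance statistic with a threshold.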

One might wonder whether it is still trust that we are talking about, or whether we have just changed the meaning of a word that we use to describe certain human relationships to make it fit AI relationships. For that does not seem to be the kind of thing that goes on when I trust someone. For example, I don’t perform any calculation. Often I just follow my gut feelings. Sometimes I rely on social or legal norms, as when I trust a mechanic to fix my car.

Take again the rock climbing example. I could calculate whether the rope can hold me, on the basis of the laws of physics. An artificial agent would rely on that calculation, and it would probably be better at it than I am. But neither an artificial agent nor I could calculate whether you can hold me. Grabbing your hand is about trust, not about calculation. In principle, I might be able to calculate that your muscles can exert the amount of effort required to hold me. But I cannot calculate whether you will be willing to put in that effort for me.

What Taddeo describes might well be the way two AI systems interact when they need to rely on each other to perform a task, but what is the point of using the “trust” terminology?

We could simply say that we rely on AI, as we do with any other tool. Unless we think that AI is really not just a tool, but something closer to a human being. That is precisely the issue I want to raise in conclusion.

 

Conclusion

What is it, exactly, that we are doing when we try to apply a terminology referring to eminently human dimensions to artificial agents? Are we just changing the meaning of words like ‘trust’?

If yes, if trust as we know it is really something different from the trust that some people think can be placed in AI, why do we want to use the same term? Perhaps this is just a concealed attempt, or an unconscious tendency, to anthropomorphize a technology.

And if it is the same thing, if we can trust a machine in the same sense as we trust a human, what does that tell us about human relationships of trust? More generally, what is left of being human if things like trust, autonomy, creativity, consciousness, morality are transferrable to machines without any loss of meaning in these words?

The suspicion is that talk of trust in AI can reveal more about being human and about different dimensions of human experience than about these technologies or our relationships with them. As AI becomes increasingly integrated into our lives, we might paradoxically get a better understanding of what it means to be human by asking ourselves whether, to what extent, and in what sense human dimensions are actually transferrable to technology. Asking questions about trust in AI is a way of asking questions about the nature of our trust in each other.

Perhaps this is the most important thing that AI can do for us: help us figure out what it is so distinctive about being human. And it is a question we cannot trust ChatGPT to answer.

 

[1] Holton, R. (1994). Deciding to trust, coming to believe. Australasian Journal of Philosophy, 72(1), 63–76.

 

[2] Taddeo, M. (2010). Modelling Trust in Artificial Agents, A First Step Toward the Analysis of e-Trust. Minds & Machines, 20, 243–257.

 


9 Comments on this post

  1. ” There you go, friend; keep as cool as you can. Face piles of trials with smiles. It riles them to believe that you perceive the web they weave…”.
    —The Moody Blues, half a century ago.

  2. Trust is an interesting measure which is capable of a great deal of interpretation.
    Trust here appears mainly as a focused and finite qualitative measure. In my opinion the focus is the problem with AI, as with many systems. People either do or do not trust their university (employer) to pay them at the relevant intervals. They either trust or not the tax authorities/adviser to identify the correct amount of tax for them to pay. Trust of various levels is put in the car manufacturers to produce vehicles of a particular quality, journals of various sorts to produce material of an acceptable quality, the same with rope manufacturers. (i.e. Software developers have very varied trust levels with bugs.) All of those varieties of trust apply different breadths of qualitative considerations across differing points of focus, turning the whole into some form of quantitative measure.
    AI may have some quantitative measures and points of focus identified sufficiently to produce material of a certain quality, and it may have real-time data updates which outstrip any human ability to keep up, but the wider qualitative issues are often not there. A not dissimilar issue is that humanity at times remains caught within its own definitions and associated measures, being most unwilling to change them enough to allow strange or different forms of thought a comprehensive definition, or accommodation within existing ones. As with intelligence: the existing definitions of intelligence are perhaps insufficient to accommodate AI processes which currently rely mainly upon the available input being regurgitated within a set of given parameters (including the creative arts), as much early schooling does.
    With humans certainly being involved, perhaps a more enlightening thread would be to consider the power flows in the construction and use of AI. It appears to have been accepted that successful developers and programmers would gain a very significant increase in power, whereas most other areas are likely (by today's concepts) to suffer a significant decline. It is accepted that current social constructs will change significantly as AI develops. So what would happen to things like democracy? AI when sufficiently developed and personalised will potentially have a response to that. But perhaps the best it can do at the moment is the same as ChatGPT, working with a very large list, compile a focusing output which may or may not be coherent or capable of being developed in a way which people would trust beyond the actual output material.
    And it is that focused output which creates the current dilemma, because trust is already being placed in the developers and programmers to ensure the list is complete for the purpose for which it is intended. That trust already exists, even if it is only that of the developers themselves in their own knowledge and skills. How much trust is placed in commercial or open source software to deliver the required functionality bug-free? How much trust is placed in those companies for privacy or security? Those same questions exist for developments in AI but they are rarely openly discussed, because that involves people, and as the article indicates the focus is on AI, not people.

    1. Exactly? Only so if people are always of prime concern and the precedence given to control or power is reasonably high.
      Where ideas are the prime motivator and only the most rigorous ideas win through, other considerations will frequently take a back seat.
      Admittedly those two are not the only worldviews; all are complicated and not always included within each other's considerations.

  3. Technology has always altered, and will continue to alter, the meaning and foundations of our language, intelligence and lifeworld. For the most part, this has been a beneficial process of creating technologies that are subsumed into and shape our lifeworld. There is of course a more extreme end of technological rationality that radically disrupts, colonizes and dominates the lifeworld and distorts our language. The developments of advanced computing, stochastic theory, information theory, control theory, systems theory, cybernetics, behavioural and cognitive psychology, and eliminative materialism in the 20th century have increased the speed and effects of the process of change. AI R&D has developed more slowly, but what it lacked in pace it has more than made up for with the myths and hype it has generated in the last 70 years.

    The example you give of Taddeo’s characterisation of e-trust is rooted in the above. The AAs’ “calculations” are a simple cybernetic explanation of a system but would require further explanation of how the system produces or is given its goals, values and thresholds. A much older philosophical problem is whether we can *trust* machines that do mathematics. Wittgenstein, for example, wrote extensively on the problem of the differences between humans and machines following rules and making mistakes, on how machine mathematics makes mathematics an experimental science, and on how “black box” (non-surveyable) machines that prove, calculate, model, generate, etc. will alter language and reason. (1) In the 1970s there was some interest in Haken et al.’s machine “proof” of the Four Colour Conjecture. The mathematician Yuri I. Manin concluded that “a proof only becomes a proof after the social act of ‘accepting it as a proof.’” This social usage argument could be what Alan Turing was describing when he predicted:

    “The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted. I believe further that no useful purpose is served by concealing these beliefs.”(2)

    Turing was reported by Robin Gandy as having great fun when he wrote this paper and it certainly is somewhat “eccentric” and confused. In brief, he was a behaviourist/cognitivist who appeared to believe that the ‘human as an information processing machine’ paradigm would be so widely accepted by people within 50 years that although general educated people might still casually talk using folk psychology to describe humans and machines ‘thinking’, they would by then realise that “behaviour” displayed during so-called thinking can be more accurately measured and analysed by the ability of the human or machine to pass his imitation game (presumably levels of performance could be tested by progressively more difficult tests). If asked to answer your question, ‘what is the point of using the “trust” terminology?’, Turing would have said ‘none at all’. However, I think Turing would be disappointed by the number of general educated people who are still using folk psychology. Perhaps he did not count on the level of confusion that has been created in part by AI hype, the promotion of ascribing mental qualities to machines, and the psychological naivety of AI researchers. (3) Turing may have thought that philosophers would have taken an interest in MI, but with a few notable exceptions, philosophers and philosophers of science have ignored MI/AI, it often being dismissed as “mere technology”. (4)

    I am of course in broad agreement with the points you make in your conclusion. But, as the above *very* brief and rough description of the issue may suggest, I think the damage has been done and it may be too late for us to, as you put it, “figure out what it is so distinctive about being human”. That is not because I believe AI and systems like ChatGPT are going to achieve AGI or super-intelligence. They are not, because systems like ChatGPT are massively hyped but relatively simple programmes with access to a mass of data that generate quite a high degree of “nonsense”. (They have their origins in early 20th century Markov chains and the work of Shannon, Wiener, von Neumann, etc.) If and when these systems start to process their own generated data output as input “training” data, the percentage of what humans identify as “nonsense” will increase. If the amount of generated data on the WWW in this feedback loop becomes too great, the level of nonsense will rapidly increase until the system collapses.

    In short, you are obviously right that we cannot trust ChatGPT to answer the question you ask. (5) The problem is it will be “answered” by the owners of “tech” corps, AI researchers, governments that use the technology for surveillance/control, the media and people who have been “educated” to accept, as Herbert Simon put it, “not how things are as science presents it”, but “how things ought to be” as technology presents it. (6)

    (1) Much of this work was published in ‘Remarks on the Foundations of Mathematics’ by Ludwig Wittgenstein (1974). It should also be noted that Martin Heidegger was discussing machine intelligence and the ways it will change language and human thought long before the dawn of AI.
    (2) Turing, Alan. Computing Machinery and Intelligence (1950). Mind, Volume LIX, Issue 236, p. 7.
    (3) John McCarthy, who came up with the term “artificial intelligence” as a promotional gimmick in 1956, wrote an influential paper entitled ‘Ascribing Mental Qualities to Machines’ (1979). We are supposed to describe machines with mental qualities (folk psychology), but at the same time we are chided for using folk psychology to describe our minds, which should be described in machine terminology!
    (4) Considering its foundations and methodology, Anglo/American analytical philosophy has been woefully neglectful of the issue. Many Continental philosophers were quick to understand what was happening but, for various reasons I cannot address here, their work lost momentum and was mostly ignored or misunderstood by analytical philosophers. (Ryle’s plagiarism of Heidegger’s ‘Being and Time’ in his ‘Concept of Mind’ was an attempt to import Continental philosophy.) We should also remember that the neglect of technology by philosophy began in the 20th century, it being a major concern in previous centuries. According to Wiener, Leibniz was “the patron saint of cybernetics.” Bentham believed that his Panopticon could radically alter language and with it *reality.* La Mettrie, Ferguson, Smith, Babbage, Ure, Marx, etc. wrote extensively on how MI and technology would alter human intelligence.
    (5) It was this type of question I set my undergrads 40 years ago. They are, so to speak, the Turing “strawberries and cream” questions that reveal the absurdity of his conjecture and methodology.
    (6) Simon, Herbert. ‘The Sciences of the Artificial’ (1969), MIT Press. Simon’s concept that science determines what is reality is of course crude, but the shift he advocates to the (science!) of the artificial does describe the world-view of so-called AI.

  4. If, as stated, philosophy and science began truly neglecting technology in favour of pure science in the 20th century, leaving political will and market forces to determine outcomes, it could be argued that during that time a new genre of fiction took philosophy's place (a genre which grew rapidly during that era), so it would be wrong to assume by extension that little thought was applied to technological change or its possible implications for humanity (problematically, due to market forces, that fiction was of a popular type). That more popular focus would appear a likely cause of the myths and hype generated over the last 70 years, creating existing circumstances causing tensions for those attempting to direct technological thought in a particular way, and if that is an accurate observation it reveals a value for truly free philosophical thought.
    Restricting the interpretation of lifeworld as the physical environment and worldview as the representation/presentation of individuals' emotional and mental life – ordinarily civil unrest at various levels can occur during changes in the lifeworld. Changes in worldviews can be equally traumatic, but may also be more limited. The main thrust regarding AI at the moment appears to be being generally directed by those interested parties who are passionate about some facet of the subject or outcome, so ignoring the wider human scale often appears as a problem. Other spheres of philosophy will begin to become more interested in AI as it develops further because of the associated issues. Many of the physical sciences perhaps not so quickly if they maintain a pure focus on assistive technology, but are being increasingly drawn towards AI. Certainly ethics/morality does and will play a leading role during these times, sometimes oddly leading the way, and not always being correct for the new environment.
    Keith observes that AI will overload itself, an observation appearing to arise out of concepts of human abilities to manage data/information. That is the same as saying that the AI creators/developers will not eventually be able to manage what are obvious focal difficulties. Something of a Kuhnian mix.
    For all the joking about Turing, people today do commonly speak of machines thinking, and do not expect to be contradicted. The whole debate about AI, together with the promotion and hype for AI, creates and supports that muddy situation where many of the clearer definitions emanating from that area can, unsurprisingly in a developing area, be seen to lack broadly based substance. Indeed the very calls of Alberto and Keith for a definition of being human would reverse AI's requirement to determine itself as thinking and feeling by providing a clearer specification for the programmers, unless that question is strictly limited to considerations by individuals about themselves and their own ethics/morality. As illustrated, turning a blind eye to different perspectives only ensures all perspectives are not fully informed, assuring a less than complete answer, and results in thought processes inclined towards resolving problems by simply turning them off by various means when they do not work within perceived parameters.

  5. Giubilini’s title cinches together much of the commentary on this topic. I have written about a number of issues, mentioning interests, preferences and motives. Anyone who has no personal or professional stake in AI and its future, whatever that holds, is not going to be interested in what someone else says or claims it holds. No interest=no preference=no motive. I, unlike some others thrashing about now, am not anti-science. I DO scrutinize and consider what some have called overreach.

  6. Hi Ian

    Some of my research into so-called AI has been concerned with how it has been portrayed in fiction and how philosophers have understood, or more often misunderstood, these fictional accounts. I’m not prepared to let analytical philosophy off its neglect of AI/MI because fiction has in some sense filled its place. Again, during the preceding centuries, philosophers engaged with fictional accounts and were of course quite active in providing their own ideas of how technology might or should be developed and how it might transform society and human existence. Analytical philosophy has for the most part rejected any form of activism or speculative thinking about the future. Indeed, although I’m opposed to Herbert Simon’s thinking and predictions of how things ought to be as technology presents it, I accept his general criticism that science and philosophy are, so to speak, operating in a vacuum of their own making.

    It is true that ethics underwent a practical turn with the emergence of medical ethics and, as Stephen Toulmin put it, saved the life of ethics. However, from my experience of teaching and researching computer ethics some 40 years ago and my knowledge of other practical ethics like bioethics, I’m not convinced that ethics can do much more than provide PR for AI/MI as it continues to be established as the major surveillance and control technology. Every “tech” corporation and government employs its ethicists for as long as they make the right noises.

    My observation that stochastic systems like ChatGPT may begin to generate an increasing amount of “nonsense” (i.e., noise and uncertainty), if and when they start to use their own output as “training” input data, is grounded in the mathematics of communication (information) theory, control theory, cybernetics, etc., which are themselves grounded in physics theories such as thermodynamics and statistical mechanics. (1) In order to prevent this the systems will have to be prevented from training on their own data and/or will have to have other systems to correct the data. This creates numerous problems, not least of which is where do these executive systems get their data to do the corrections?

    Not sure about your last paragraph, perhaps I did not make myself clear. Turing was doing the joking. However, he was quite clear in his paper and elsewhere that the question ‘Can machines think?’ is too meaningless to deserve discussion. I also believe it to be meaningless but I’m concerned that the ramifications of our belief are as usual ignored by most philosophers and just about everyone else. That Turing’s prediction was right – that people and indeed AI “experts” are now using *meaningless* folk and naïve psychology when describing machines – would be no comfort to Turing. Distinctions and clarity are essential, not just a courtesy. These issues cannot be discussed here, but suffice to say, regardless of our opinions on whether AGI is possible, we must recognise that advanced computing/MI may radically change our world because at the very least it will give the *controllers* of the technology unbridled power. As I have said before, I fear it may be too late for such a *philosophical* discourse. I’m inclined to take the long view like Zhou Enlai: it’s too early to say what the effects of earlier Industrial Revolutions are. (I prefer Machine Revolutions.) Unfortunately, philosophy has done very little to improve our understanding of them and is now totally unprepared and out of time for the next Machine Revolution.

    (1) These theories have been *applied* to human communication and carry some explanatory weight. I certainly do not use, as you put it, “concepts of human abilities to manage data/information” when analysing AI systems because it is quite obvious that the processes that produce MI have little to do with human intelligence and should not be analysed using folk psychology until one comes to understand how different and *alien* MI is to human intelligence.

  7. Keith says “science and philosophy… in a vacuum of their own making”. Potentially, but writers like Bergson and Feyerabend provide a different insight which would turn that into “…a vacuum of man's own making”, reflecting a constraint which is surmountable yet not limited in the way applied. (See further on this area below.) My comment about fiction during the 20th century did say – by extension – rather than – instead of – as you appear to have interpreted it.
    During the following, another AI teaching, or controlling AI debate is ignored because the issues become very similar, only the speed changes.
    Yes corporate ethicists, the same as corporate lawyers, tread a fine line by which they dance around the subject in a way that never really pierces the essence of the issue but rather projects a particularly favoured response suitable for their organisation, and whilst many are happy at that, academic ethics, in my opinion, should challenge and be challenged rather than be paid to visualize but often actually only project. The main alternative of faith based systems (including science), generally attempt to retain a more rigidly controlled, but probably because of their longevity, broader frameworks than corporate perspectives.
    Many times the issue of individual character is thrown out as part of these debates, so the character AI apparently exhibits becomes of concern to those who adhere to that concept. The basis of that argument appears in the nature v nurture debates which more often become rife regarding education. In my worldview those facets are time related and often change during the course of events and environment (individually chosen or not) directly altering the worldview. For example certain character traits will be attracted to social groups which support part of their own worldview, take for example the ‘Trump’ issue in the USA which forms one worldview containing many others, with many members sharing what could be called a particular character trait. Information, learning, reflection, and time will often change/alter those worldviews and affect what may be called character traits. That change will not happen if the individuals involved do not become exposed to external material, or accept any of it, which is where it seems to me both our arguments so far fail regarding the nonsense created by inwardly focusing systems. As such human interests outside of the scope of the immediate interest of individuals (things seen as different) enrich and improve the individual, allowing greater insight and creativity to flourish across all areas – one could argue that is why the arts become so important. Jacques Rancière in his at times almost poetical book ‘The Future of the Image’ gives some insight into how those areas can then affect and extend people's (artists' and audiences') understanding and involvement. Rancière observes “And if there is nothing other than the image; the very notion of the image becomes devoid of content. Several contemporary authors thus contrast the Image, which refers to an other, and the visual, which refers to nothing but itself. This simple line of argument already prompts a question. That the Same is the opposite of the Other is readily intelligible. Understanding what this other is is less straightforward.”
    AI, whilst it can currently produce artwork suitably reflecting different artists' work, methods and styles, does not yet realistically, on its own, create those links, so it seems to me that is the issue you illustrate with the comment about introverted AI/MI. External input can reduce the incidence of nonsense by allowing other nonsense to intrude, facilitating the mind, if it wishes, to perceive and by thought, comprehend. But for many, releasing AI into the ‘wild’ to inform and develop itself is perceived as hugely dangerous because the human control element becomes weakened or negated, because the artificial mind would be perceived to be seeking out and informing itself more widely, allowing creative challenges to be made against original material which the creators themselves may not be aware of or able to refute.
    We will have to differ on the issue of the processes producing the information having little to do with human intelligence. From my current perspective human intelligence produced them and will itself (still at this time) be somewhat altering and feeding its own product, therefore, as the original article states, it is about humans, (I would say… also about humans) but as in the point you raised, more often in the narrower corporate worldview sense than that of a broadly common humanity of differing social beings.
    As to the observation about medical ethics saving the life of ethics. Considering the different areas of focus applied above and the observations made would cause me to dispute that. It may have firmly grounded ethics within a purely human context, but if that is the whole focus it has also significantly limited it. As I suspect the whole AI subject area discussion in the wider world is probably beginning to bring to light. There is also a very significant risk that during any considerations of widening the scope of medical ethics a more restrictive framework could become generally applied, which at times may be seen as politically advantageous, but mirrors one of the concerns regarding AI.
    Alberto has been very patient in allowing such a long winded discussion so I thank him for the original article stimulating this, and his patience during the discussions.
