
Cross Post: What’s wrong with lying to a chatbot?

Written by Dominic Wilkinson, Consultant Neonatologist and Professor of Ethics, University of Oxford

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Imagine that you are on the waiting list for a non-urgent operation. You were seen in the clinic some months ago, but still don’t have a date for the procedure. It is extremely frustrating, but it seems that you will just have to wait.

However, the hospital surgical team has just got in contact via a chatbot. The chatbot asks some screening questions about whether your symptoms have worsened since you were last seen, and whether they are stopping you from sleeping, working, or doing your everyday activities.

Your symptoms are much the same, but part of you wonders if you should answer yes. After all, perhaps that will get you bumped up the list, or at least able to speak to someone. And anyway, it’s not as if this is a real person.

The above situation is based on chatbots already being used in the NHS to identify patients who no longer need to be on a waiting list, or who need to be prioritised.

There is huge interest in using large language models (like ChatGPT) to manage communications efficiently in healthcare (for example, symptom advice, triage and appointment management). But when we interact with these virtual agents, do the normal ethical standards apply? Is it wrong – or at least is it as wrong – if we fib to a conversational AI?

There is psychological evidence that people are much more likely to be dishonest if they are knowingly interacting with a virtual agent.

In one experiment, people were asked to toss a coin and report the number of heads. (They could get higher compensation if they had achieved a larger number.) The rate of cheating was three times higher if they were reporting to a machine than to a human. This suggests that some people would be more inclined to lie to a waiting-list chatbot.

Image: a hand tossing a coin. The rate of cheating was three times higher when reporting a coin-toss result to a machine. Yeti studio/Shutterstock https://www.shutterstock.com/image-photo/hand-throwing-coin-on-white-background-1043250901

One potential reason people are more honest with humans is their sensitivity to how they are perceived by others. The chatbot is not going to look down on you, judge you or speak harshly of you.

But we might ask a deeper question about why lying is wrong, and whether a virtual conversational partner changes that.

The ethics of lying

There are different ways that we can think about the ethics of lying.

Lying can be bad because it causes harm to other people. Lies can be deeply hurtful to another person. They can cause someone to act on false information, or to be falsely reassured.

Sometimes, lies can harm because they undermine someone else’s trust in people more generally. But those reasons will often not apply to the chatbot.

Lies can wrong another person, even if they do not cause harm. If we willingly deceive another person, we potentially fail to respect their rational agency, or use them as a means to an end. But it is not clear that we can deceive or wrong a chatbot, since it has no mind or ability to reason.

Lying can be bad for us because it undermines our credibility. Communication with other people is important. But when we knowingly make false utterances, we diminish the value, in other people’s eyes, of our testimony.

For the person who repeatedly expresses falsehoods, everything that they say then falls into question. This is part of the reason we care about lying and our social image. But unless our interactions with the chatbot are recorded and communicated (for example, to humans), our chatbot lies aren’t going to have that effect.

Lying is also bad for us because it can lead to others being untruthful to us in turn. (Why should people be honest with us if we won’t be honest with them?)

But again, that is unlikely to be a consequence of lying to a chatbot. On the contrary, this type of effect could be partly an incentive to lie to a chatbot, since people may be conscious of the reported tendency of ChatGPT and similar agents to confabulate.

Fairness

Of course, lying can be wrong for reasons of fairness. This is potentially the most significant reason that it is wrong to lie to a chatbot. If you were moved up the waiting list because of a lie, someone else would thereby be unfairly displaced.

Lies potentially become a form of fraud if they secure you an unfair or unlawful gain, or deprive someone else of a legal right. Insurance companies are particularly keen to emphasise this when they use chatbots in new insurance applications.

Any time that you have a real-world benefit from a lie in a chatbot interaction, your claim to that benefit is potentially suspect. The anonymity of online interactions might lead to a feeling that no one will ever find out.

But many chatbot interactions, such as insurance applications, are recorded. It may be just as likely, or even more likely, that fraud will be detected.

Virtue

I have focused on the bad consequences of lying and the ethical rules or laws that might be broken when we lie. But there is one more ethical reason that lying is wrong. This relates to our character and the type of person we are. This is often captured in the ethical importance of virtue.

Unless there are exceptional circumstances, we might think that we should be honest in our communication, even if we know that this won’t harm anyone or break any rules. An honest character would be good for reasons already mentioned, but it is also potentially good in itself. A virtue of honesty is also self-reinforcing: if we cultivate the virtue, it helps to reduce the temptation to lie.

This leads to an open question about how these new types of interactions will change our character more generally.

The virtues that apply to interacting with chatbots or virtual agents may be different from those that apply when we interact with real people. It may not always be wrong to lie to a chatbot. This may in turn lead to us adopting different standards for virtual communication. But if it does, one worry is whether it might affect our tendency to be honest in the rest of our lives.



3 Comments on this post

  1. This documents and raises a pertinent question about lying. If a chatbot is configured in such a way that the material it presents to its user in the questions posed is misleading, is that seen as a lie?
    If a chatbot is perceived to be configured in such a way, does any response given to it in order to negate or balance that lie count as a lie?
    This leads into the question: do lies exist independently of any direct human communication (think of scientific data that misleads the senses)? Also, within worldviews, truths may be spoken, but be received by other worldviews as lies. Consider, in the medical example given, the person whose pain has increased significantly because of a change in painkillers by their doctor (who is attempting to assure fairness for a virtuous patient who would otherwise unduly suffer).
    These partially reflect the observation made: “Lying is also bad for us because it can lead to others being untruthful to us in turn. (Why should people be honest with us if we won’t be honest with them?)”
    So, moving beyond the systemic politics and damage done, what causes give rise to lies, and what do lies often really consist of in the contexts where they are identified?

    As a general observation, given the proclivity to knowingly lie or unknowingly repeat lies in many existing social worlds, those who speak only in truths across communities commonly seem to suffer more than liars, until virtuous considerations become holistically perceived and applied.

  2. Sometimes, I “just don’t get it”. Other times, when I DO, it makes me irate. I think, and have remarked, that the chatbot phenomenon is an abrogation of ethics, responsive consciousness and common decency. Yet we see how the trend flows into popular culture. For my part, I see this as cultural ferment. Natural fermentation is a good thing, producing a variety of healthy food products. Cultural ferment is spoilage and causes upheaval of many sorts. I can’t consider lying to a chatbot unethical or immoral. How someone else thinks of this is their affair.

    1. There are worldviews which consider cultural fermentation to be no more than creating a foment.
      That perspective may be countered, for those worldviews, by pointing out the variety of outcomes, but it becomes problematic when too much heat is applied, or when a very broad spectrum of the public is consistently and continuously fed the same material. It then borders upon, or becomes, brainwashing and begins to rely rather soullessly upon transactional dealings.
      Are ethical approaches which agitate large bodies of persons by brainwashing ethical for all of the worldviews involved? For example, do they allow for all freedoms, or are they focused upon only one?

      Take what may be seen as politically correct approaches: if they are developed by an individual in response to increasing amounts of knowledge, insights and comprehension they have gained, that would seem to allow for their freedoms, and would anyway appear to be a natural outcome (unless considerable self-discipline were maintained to avoid that outcome), provided the freedom to gain knowledge in widely disparate areas were allowed or facilitated. The way that type of PC is then freely exercised by the individual becomes a telling thing about that individual. (Think character traits, worldviews and developed or developing know-how about methodologies.) Think of the term used earlier, ‘responsive consciousness’.
      Focus now upon large populations. Mechanisms and processes which were previously limited in scope to circumscribed populations and/or refined subject areas can, when their use is mandatorily required and automated, become aggressive tools for many worldviews, because they remove many freedoms of expression otherwise enjoyed. (Consider part of the resistance Google experienced.)
      Now consider the values of those directing the collation and development of much automated content: the sheer variety of expressed values becomes reduced. Back to what was said previously, it is not fermentation but rather fomentation which occurs. Looking then to the naturally human question of how resistance is avoided, this plays into the personalisation of services, which itself creates inequalities that are then open to manipulation by those willing and able to deal transactionally.
      The outcome becomes: do you wish to be manipulated, in ways you may not be aware of, as part of a social group, as an individual, or both?
      Historically, most people have shown they would rather not be secretly manipulated, and clear feedback mechanisms, often generated in various ways during human contact, have been used to reduce directions of travel which would allow or increase such manipulation (responsive consciousness and common decency). Automated feedback mechanisms will not necessarily answer that need, even when deploying current developments in the automated identification of emotions. (Look to many reactions to early versions of Alexa, where swearing at the machine, or showing negativity towards a wrong outcome, could trigger a telephone call or other communication seeking to find out what went wrong – certainly badly thought out, though perhaps ethically driven, certainly self-interested, and using a form of AI-type mechanism.)
      The individual’s time no longer remains their own to do with as they please, because others use knowledge gained in ways beneficial to them to freely appropriate that individual’s time in the ways already mentioned, all of which serve, in a transactional world of instant communications, to further reduce the potential for individuals to have private moments or much privacy for private reflection.
