
Caution with chatbots? Generative AI in healthcare


Written by Dr Jeremy Gauntlett-Gilbert, MSt in Practical Ethics student

Human beings, as a species, love to tell stories and to imagine that there are person-like agents behind events. The Ancient Greeks saw the rivers and the winds as personalised deities, placating them if they appeared ‘angry’. In classic 1940s experiments, psychologists were struck by how readily participants generated complex narratives about animations of small abstract shapes simply bumping into each other.

This human tendency collided with technology in the 1960s. Joseph Weizenbaum, a computer scientist, wrote software that replied to a user’s typed input with counselling-style responses. For example, the computer would “reply” with phrases like “I’m sorry to hear you’re depressed”, or reframe the person’s statement as a question. To his astonishment, people readily had strong emotional responses to the computer and began to feel that it understood them. This has been referred to as the ELIZA effect: the tendency for people to ascribe abilities and qualities to software that it cannot possibly have.

The next great collision has been with large language models such as ChatGPT, and with AI chatbot ‘friends’ such as Replika. These applications can ‘interact’ rapidly and in a way that is personalised to the user, and users can form significant bonds with them. Notoriously, a Replika chatbot gave consistently encouraging responses to a young man who had decided to kill the Queen of England (he was caught in the grounds of Windsor Castle).

Chatbots can offer on-tap, personalised, informed responses to user requests. In healthcare, they have been used to help people change health behaviours, such as taking more exercise or losing weight. They have offered a mixture of information and support to people with cancer, and are even now delivering psychological therapy. Thus, they can be placed in the role of caring for people, in the way that human clinicians would usually care for people.

Early research results are very encouraging – patients can form ‘human-level’ therapeutic relationships with their therapy bots, and have rated chatbot responses to cancer-care questions as more empathic than those written by clinicians. The potential is obvious; in most healthcare systems, trained therapists are in short supply, and cancer nurses usually can’t respond to questions at two o’clock in the morning. Tech companies and healthcare commissioners are quick to see the benefits. However, are there reasons to be cautious? Beyond the usual ethical concerns about generative AI, such as bias in training data and the ascription of responsibility, are there any specific reasons to be concerned about using chatbots with vulnerable populations, even if people like them?

The ELIZA effect is powerfully active here. Research subjects have given chatbots very high scores for empathy; in reality, however, the chatbot is no more empathic than a fruit fly. The therapy bot ‘Woebot’ makes the promise that “I am here for you, 24/7.” But of course, Woebot isn’t here for you, and doesn’t care. It would be more accurate to say “this software will always generate personalised text that gives the strong impression of empathy, 24/7”. In philosophical terms, chatbots can be reliable, and probably are; however, they cannot be trustworthy, as trustworthiness is a specific virtue of an agent, a virtue that software does not currently have. And they are certainly neither empathic nor compassionate.

This raises the question – how do we think about consent when we know that people – vulnerable people – will probably come to believe that they are being supported and understood by something that can do neither? The idea of valid consent is central to health care, particularly in consent to research or to procedures such as surgery. Should chatbots be exempt from consent procedures, particularly when we know that they target such a weak point in human psychology?

In healthcare, most people can agree that there should be no deception when it comes to consenting to procedures or treatments. Concepts of valid consent usually include the ideas of rationality and authenticity. That is, we should be able to think clearly about an intervention, and our response should be aligned with our deeper desires and judgements. How would our ‘most authentic self’ feel if, having spilled its guts about a hugely sensitive personal issue, it realised that it had done this to a device with no more empathy than a bacterium? Surely, rational, autonomous and authentic consent would require every user of a ‘caring’ chatbot to be clearly informed that there will be no true care or empathy in the interaction, ever, even if we come to feel that there is. It is unlikely that tech companies will relish this kind of consent process for their products, however truthful it might be.

Perhaps this is over-stating things. After all, we willingly ‘suspend disbelief’ when we go to a movie or read fiction, and sometimes doctors will effectively use ‘open-label placebos’, where both doctor and patient know it’s a sugar pill. Surely feeling warmly towards your chatbot is no worse? Unfortunately, these comparisons fail to appreciate the exact nature of the technology. Neither books nor films give on-tap, personalised, precise responses to our questions, which is why no-one has ever got the impression that their sci-fi novel ‘really understands and cares about them’.

Just as patients may not realise that empathic-sounding text responses don’t reflect real empathy, as a society we may be too slow to realise that ‘caring’ chatbots do not deliver care. There may be great benefit in receiving accurate, personalised information in the middle of the night, phrased in a gentle and warm way. But it isn’t care. There is a strong philosophical tradition of Care Ethics, rooted partly in feminist thought, which argues that societally we have devalued the practice and values of care, relegating it to unpaid ‘women’s work’. The Care Ethicists may have the best account of what risks being lost in the implementation of AI.

A chatbot can helpfully extend the availability of health care, being available 24/7. It might also free clinician time by taking on administrative tasks, leaving more room for patient-clinician interaction. However, in practice it will often be used to replace human interaction, and ethical thinking requires us to examine this carefully in each implementation. For example, a person might be given a ‘choice’ between trialling a chatbot and seeing a human therapist. However, the reality of the ‘choice’ may be (1) a chatbot now, or (2) a human therapist in four months. Is this truly an autonomous ‘choice’ for a vulnerable, desperate person? A commissioner might argue that it is great to offer a choice to someone on a long waiting list. Perhaps it is – but are the funds spent on the chatbot coming out of the general budget (including staffing), making the wait for a human even longer?

Thus, chatbots might ‘work’ and be welcomed by patients, and still raise ethical issues. It is helpful to recall that platforms such as Google and Facebook were hugely effective and very popular, yet we are still dealing with the ethical problems that they pose. We can welcome the new class of weight loss drugs, such as Ozempic, without naively having to believe that they are an unmixed blessing. We should bring the same thoughtfulness to AI systems that are put in the position of care but may displace the people who can actually deliver it.
