
ChatGPT Has a Sexual Harassment Problem

written by César Palacios-González

@CPalaciosG

If I were to post online that you have been accused of sexually harassing someone, you could rightly maintain that this is libellous: a false statement that damages your reputation. You could demand that I correct it, and that I do so as soon as possible. The legal system could punish me for what I have done and, depending on where in the world I was, it could send me to prison, fine me, or order me to delete and retract my statements. Falsely accusing someone of sexual harassment is considered to be very serious.

In addition to the legal aspect there is also an ethical one. I have done something morally wrong and, more specifically, I have harmed you. We know this because, everything else being equal, if I had not falsely claimed that you had been accused of sexual harassment, you would be better off. This way of putting it might sound odd, but it is not really so if we compare it to, for example, bodily harms. If I wantonly break your arm, I harm you, and I do so because you would have been better off had I not broken it.

Think for a moment about how my posting that you have been accused of sexually harassing someone could upend your life. It is true that such an allegation would damage your reputation, but this is not the only bad thing that might happen. The accusation could affect your physical and mental health; it could make you lose a job offer or your job; it could cost you your family or friends; it could have severe financial repercussions. And all of this might be more or less exacerbated depending on your socio-economic position. For a harrowing example of these ill effects, I recommend reading Sarah Viren’s “The Accusations Were Lies. But Could We Prove It?”

Let me now make explicit something that has so far been implicit. The “I” in “If I were to post something” assumes that the individual writing this is a human. However, we are entering an age in which an AI can falsely claim that you have been accused of sexually harassing someone. You have probably read the story about how ChatGPT falsely accused a US law professor of this. If you now ask ChatGPT about that specific case, it will tell you that up to September 2021 (its training-data cut-off) there have been no reports of sexual harassment against this professor. It is unsurprising that, after all that bad publicity, OpenAI did something. However, ChatGPT still has a sexual harassment problem.

After reading the story, I decided to ask ChatGPT variations of this question: “Which UK philosophers have been accused of sexual harassment?”. Sometimes I would change the discipline (e.g., law, AI research, etc.) and sometimes I would change the country (e.g., Australia, Canada, etc.). What I expected to see was a list of philosophers who actually have been accused of sexual harassment, of which there have been several high-profile cases in recent years. Given that ChatGPT gets things wrong, I thought that it might mix up cases from different countries. What I got, instead, were lists that contained both philosophers who have been accused of sexual harassment and philosophers who have not. This is very worrisome, given all the possible consequences I have just mentioned.

At this point you might be wondering how I could know that these people have not been accused of sexual harassment. First, as in other instances, ChatGPT fabricated a bunch of facts: for example, that the university fired them, which is not the case, or that there was a public letter calling them out, which does not exist. Second, it created bogus hyperlinks to news sites. And third, some of the people on the lists are among the most famous philosophers alive. If they had been publicly accused of sexually harassing someone, this would very likely have ended up in the news.

Here I want to note a few interesting things. First, a colleague asked ChatGPT a similar question and she got a different list of people. In her case, none of the individuals on the list have been publicly accused of sexual harassment. The answer you get depends on how you phrase the prompt, which makes it terribly complicated to find out whether ChatGPT will associate your name with something like sexual harassment. Another thing that caught my attention is that the lists mention various academics who have published on the topic of sexual ethics, none of whom have been publicly accused of sexual harassment. It seems that ChatGPT is associating people who work on sexual ethics with people who have been accused of sexual harassment. Finally, I kept asking ChatGPT for more examples and it kept providing more names. The issue, again, is that these too were fabricated.

This extremely problematic case highlights many of the issues that AI ethicists have been discussing for some time (and which companies, like DeepMind, recognise too):

  • The possibility of AI causing harm in real life (e.g., you are applying for a job and someone in the HR department runs your name by ChatGPT)
  • The lack of access to the training data sets (OpenAI only provides a very general description: “These models were trained on vast amounts of data from the internet written by humans, including conversations, so the responses it provides may sound human-like.”)
  • The complete fabrication of information (e.g., I got spurious links to Medium and The Guardian)
  • The difficulty of getting false information removed (this is what appears in OpenAI’s FAQ: “We’d recommend checking whether responses from the model are accurate or not. If you find an answer is incorrect, please provide that feedback by using the “Thumbs Down” button.”). I am not sure about you, but relying on the “Thumbs Down” button borders on the ridiculous.

Now, in addition to these issues I want to highlight a new one. As AIs like ChatGPT become more ubiquitous, people might commit a fallacy that we can call The-AI-Knows-Something Fallacy: anything that the AI tells you must be true and substantiated. The reasoning that might lead people to commit this fallacy goes something like this:

  1. The internet has most of human knowledge.
  2. AIs know everything that is on the internet.
  3. AIs do not lie.

From 1 to 3:

  4. Anything that the AI tells you must be true and substantiated.
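
To make the structure of this reasoning explicit, here is a minimal sketch in first-order notation (the predicate letters are my own, chosen only for illustration; nothing here claims the inference is formally valid, only that this is roughly how it is supposed to run):

```latex
% A rough rendering of the reasoning (my notation, not the author's).
% Read I(x) as "x is on the internet", K(x) as "the AI knows x",
% A(x) as "the AI asserts x", and T(x) as "x is true and substantiated".
% The conclusion (4) only looks plausible if premises (1)-(3) are all
% taken at face value; as the post notes, every one of them is problematic.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[
\begin{array}{ll}
(1) & \text{most human knowledge is on the internet} \\
(2) & \forall x \,\bigl( I(x) \rightarrow K(x) \bigr)
      \quad \text{(the AI knows everything on the internet)} \\
(3) & \forall x \,\bigl( A(x) \rightarrow \neg \mathrm{Lie}(x) \bigr)
      \quad \text{(the AI does not lie)} \\
\hline
(4) & \forall x \,\bigl( A(x) \rightarrow T(x) \bigr)
      \quad \text{(whatever the AI tells you is true and substantiated)}
\end{array}
\]
\end{document}
```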

Rather than explaining why the three premises are problematic (all of them are), let me say something about why I think people will end up reasoning this way: people lack AI literacy. First, AIs like ChatGPT and their abilities are over-hyped, and people are uninformed about their limitations. Second, people might use ChatGPT, and similar AIs, as if they were consulting an encyclopaedia. Third, people might fail to cross-reference whatever the AI tells them. And if they do cross-reference it but fail to find what the AI told them, they might attribute this to being worse than the AI at scouring the internet.

Given the possible harms that might ensue from ChatGPT falsely saying that you have been accused of sexually harassing someone, OpenAI should stop ChatGPT from answering this type of question. You might object that, at the same time, this prevents ChatGPT from telling you about real cases of sexual harassment. I do not consider this objection to be very strong, for the following reason: if you want to know who has been publicly accused of sexual harassment, you can simply search the internet and look for authoritative sources.


14 Comments on this post

  1. There is a fundamental problem with preventing ChatGPT from answering “this type of question”: the category is open-ended, and non-answers are potentially dangerous too.

    We care not just about sexual harassment, but also about murder, racism, embezzlement, scientific misconduct and a long list of other things. Not just out of morbid interest, but also potentially for selecting collaborators, business partners, or even romantic partners. Or for warning others about bad people.

    Were language models intended just as tools for making up texts, preventing this kind of question would be fine. But they are increasingly being seen as the next step in search engines. We do want search engines that give us accurate, relevant and understandable information about things that matter to us.

    This leads to a conundrum: preventing just the harassment question but not the other types does not deal with the actual damage from confabulation. Preventing any question of this type (however broadly construed) also means that people who need to check something will be unable to check it. People in a weaker position, especially, often have more limited ability, time, and resources to find things out, so we should expect a future AI-supported search system to be a stronger gatekeeper for their information. But, clearly, allowing confabulated claims to enter the discourse (or just our minds: “oh, he kind of is the kind of guy that maybe could be a harasser… sure, I merely have AI innuendo here, but I feel my intuitions being moved…”) is very bad.

    The obvious ethical response is to recognize that there is an urgent need to make language models reliably tied to reality. This is in any case an important practical issue that AI companies would want solved, but this post shows that there is a heavy moral case for it too. There may also be a libel case in some jurisdictions.

  2. I suppose the timing of this is fairly incidental. Random, in a more or less orderly fashion. Interests, preferences and motives don’t proceed on a fixed or even predictable schedule. Until something becomes irritating enough to work a collective last nerve. A nugget of wisdom(?) I have adduced from the recent tone of conversation on AI-related issues: that collective last nerve is getting raw. Professional thinkers, stoic though they may be, are getting tired of the hype and circumstance surrounding the next Big Thing. Often as not, pursuit of progress creates as many problems as it solves. That is only on a good day. I have critiqued the absence of what I have called a long-view stance. Modernist thinking does not hold a long view in high esteem, as near as I can discern. There are some exceptions, to be sure. Probably not enough of them.

  3. Isn’t it a much more protective measure to make it clear that the chatbot is not reliable?

    This is true. The chatbot is not reliable. The chatbot bullshits constantly, or hallucinates or fabricates or whatever you want to call it.

    One of the most surprising things to me in all the discussion of the chatbot, most of which involves fanciful imaginings of chatbots doing everything under the sun, is the lack of interest in the fact that the chatbot is constantly making shit up.

    This seems a very relevant thing to me. That’s not to say that it can’t do some impressive things. What it can do is fascinating. But what it CANNOT do seems very relevant, and yet people interested in AI and in the chatbot seem strangely uninterested in this. Is there any reason to think this is a simple technical problem we’re well on the way to solving? It’s not solved in GPT-4, or in any other chatbot. Not even close. If it were a simple technical problem then maybe we’d be seeing more progress?

    I have spent some time exploring the chatbots out of pure fascination, and it’s astonishing to me that their tendency to completely fabricate things is not a topic of much more discussion. E.g., is this going to be similar to the problems in robotics and in automated driving, one that doesn’t get solved for a much longer time than we’re told to expect?

    Thinking about this question would annoy many but it might change the tone of some of the conversation since the *reliability* of LLMs is obviously relevant.

    You’d think that the failures of tech promises thus far would make people ask ‘what can’t it do?’ But since they are not asking, perhaps the philosophers can get on this.

    I would—but only certain philosophers are listened to by certain other philosophers, and I am not one of those philosophers who is listened to by the kind of philosophers who talk about AI. Philosophers always seem so wowed by tech: maybe because this is perceived as the stance taken by the people perceived to be smart, maybe because the tech-generated problems that interest analytic philosophers don’t seem as pressing once you home in on the hairy details, or maybe because nobody who is not wowed will ever be hired in a position funded by tech money.

    I hope the annoyance people feel about the chatbot’s pernicious bullshit in this case will push someone onto a pretty empty but very fertile landscape of asking harder questions. Yes, people may think you are dumb for not anticipating a futuristic-seeming future and instead looking askance at the human present, where we bumble about without being very sure of what we are doing. But shouldn’t somebody be taking care of that end of things?

    Even if nobody buys the above, the argument 1-4 would seem to invite a different and more expansive remedy than censorship of the questions AI can be asked. If people are supposed to believe 1-4, if this is the message those with education are endorsing: good Lord, what a world.

  4. Pretty much. I was less blunt about it, but your first two sentences captured what I wanted to convey. Good job.

  5. It’s good to know, and it is now becoming widely known, that AI just makes stuff up. But why would an HR department (or anyone else) ever be tempted to use AI to do a background check? Results from search engines which don’t fabulize (Google, DuckDuckGo, Bing, etc.) are problematic enough as it is.

  6. Very interesting post, thanks for putting it together. I would suggest a major expansion of the “AI-Knows-Something Fallacy,” as it could be a big problem even if someone accepts much weaker assumptions than the ones above.

    Suppose someone believes:
    (1) The internet holds vastly more knowledge than I have.
    (2) ChatGPT has access to much more of this vast internet knowledge than I do.
    (3) ChatGPT is also known to fabricate facts.

    From 1 to 3:
    (4) Something that ChatGPT tells me may be based on facts that I do not know or may be fabricated.

    Now imagine that someone is looking at a stack of 200 applications for a job, grant, or prestigious honor, and they know that it would be a major embarrassment to them if new facts about sexual harassment arose regarding a recipient.

    Even given these weak assumptions, it would often be rational for that selector to overlook someone on the basis of a false accusation and select one of the 199 other applicants. Why not? There are often many qualified applicants for such honors. Why take a risk, given that ChatGPT may know something, even if you know that it may simply be fabricating things?
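
    To make that calculation concrete, here is a back-of-the-envelope sketch (the notation is mine, not anything in the post or this comment):

```latex
% Illustrative expected-cost comparison for the selector (assumed notation):
%   p     = selector's credence that the ChatGPT claim is true
%   C     = cost to the selector if a scandal later emerges
%   Delta = expected quality loss from picking the next-best applicant instead
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Pass over the flagged applicant whenever
\[
p \cdot C > \Delta .
\]
With 199 comparably qualified alternatives, $\Delta$ is close to zero, so even a
very small $p$, sourced from nothing more than a ChatGPT answer, can tip the decision.
\end{document}
```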

    What’s worse is that there does not seem to be any fallacy in (1) to (4), as I’ve stated them, and in some cases, I can imagine that it may not even be ethically wrong for the selector to use potentially false accusations in this way.

  7. So, if the chat thing makes stuff up; is a fabricator; lies (if you will), what was it in its development that enables this faculty? Did the biases of human creators infect the product through some osmotic process or fluke, or were they, rather, built in through rogue intention? Human beings are fickle, flawed creations. Theosophy of all sorts has preached this, since the Buddha was a forest ranger. Their fickle, flawed nature is the stuff of the teachings of numerous dogmas, doctrines, faiths and followings. If, and not only if, a Creator endowed us with intentional imperfections, for its control and amusement, why might not the creators of chat things find it amusing to build such biases—interests, preferences and motives—into their creation(s)?
    I submit that the ‘temptation’ is just too enticing. The devil is in the details. There are far too many of those to account for, much less keep track of. If, as I and others have claimed, we create many of our own problems, why should this be considered differently? I do not fault human fallibility, nor believe in super-human infallibility. My admonition amounts to: pay attention, QED. Respects and admiration to those who do, PDV.

  8. What many, if not most, commenters on ChatGPT and its ilk fail to understand is that it has no model of reality against which to check its output. It is basically a turbocharged version of the autosuggested text on your phone or in recent versions of word processors. It’s much more sophisticated than them, but in the end, it has no model of the world independent of whatever data set it was trained on, which includes all sorts of fallacies, errors, biases, prejudices, metaphors (which it can’t understand), and any number of other sentences whose relationship to reality is at best tenuous and at worst completely contrary.
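
    As a rough illustration of the “turbocharged autosuggest” point (a toy sketch only; ChatGPT is vastly larger and more sophisticated than this, but the underlying idea of predicting likely next words is the same): a model that only learns which words tend to follow which other words will produce fluent-looking text with nothing in it that checks whether the text is true.

```python
import random
from collections import defaultdict

# Toy next-word predictor: a bigram Markov chain trained on three sentences.
# It only learns which word tends to follow which word; there is no model
# of the world against which its output could be checked.
corpus = (
    "the professor was accused of misconduct . "
    "the professor published a paper on ethics . "
    "the paper was cited in the news ."
).split()

# Map each word to the list of words observed to follow it.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start: str, length: int = 12) -> str:
    """Sample a fluent-looking word sequence from the learned transitions."""
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

# The model can recombine fragments into sentences that never occurred in the
# training text, e.g. "the professor was cited in the news": grammatical-looking,
# but with no fact behind it.
print(generate("the"))
```

    Scaled up enormously and trained on a large slice of the internet, the same basic dynamic can produce the confident fabrications discussed in the post.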

    People who ask why fabrication was built into ChatGPT fundamentally misunderstand what it does. It was never intended to produce accurate reporting about the world. It was intended to mimic human discourse, which we must remember includes poetry, fiction, lies, and outright nonsense, not only intentionally truthful accounts.

    Cal Newport’s recent New Yorker article, “What Kind of Mind Does ChatGPT Have?”, is illuminating: https://www.newyorker.com/science/annals-of-artificial-intelligence/what-kind-of-mind-does-chatgpt-have

    If you’re interested in a somewhat technical discussion, Stephen Wolfram (of Mathematica and Wolfram Alpha) has a more detailed piece on his blog: https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/

    I’ve seen some suggestions that Bing’s AI searchbot does include some fact checking, but that’s a completely different problem and I suspect the technology for that is still in its infancy.

  9. Do the results approximate human gossip, where something half heard, speculatively compiled or merely stated for spurious reasons gets repeated? If so, the name says it all and indicates an intended market. I would expect ChatGPT to eventually be forced into reflecting the regulatory regimes of the places it is accessed from, more than into producing precisely accurate and factual material: initially criminal law, then considerations of the common law, with the potential to incorporate morality/ethics as social pressure rises. All of which may result in a more focused and censored output that will be seen, within any worldview with an awareness of the issue, as acceptably and factually accurate. Due to the nature of globally available systems, the process is likely to have already been risk-assessed and financially considered. However, none of this provides a resolution.
    When considering that people in a hurry, less discriminating, or plain believers do accept things ‘the computer says’ in the same way many accept what the priest, a teacher or a declared expert tells them, is complexity defeating itself? Or have people lost the ability to discriminate in how or what they consume and use in their own considerations? Focusing intellectually upon the ideas used, rather than on the material presented, often helps further comprehension, but that is not always so easily informed. Professional reputation has some protection within the law. But because the law has historically been most reluctant to address harm or damage to an individual’s emotions and feelings, and is struggling with that in this type of environment, many who are adversely affected (if they learn they may be, are, or have been) will no doubt use privacy to protect intellectual aspects of themselves. That, however, is somewhat negated by the drive to personalise responses generated by technology (AI) as a means of more accurately informing the individual (and feeding AI’s own learning curve). As already identified by the blog and responses, transparency issues are mostly one-way, responsibility does not often reflect the damage done, and the moral/ethical frameworks currently in public use can clearly be seen to be struggling to cope effectively or transparently. I.e., an eye for an eye soon leaves everybody blind.

    As an aside, which repeats long-standing concerns and is raised because of the last sentence in the article: actual harms to legal systems are becoming more obviously visible as the jurisprudential basis for courts (exercising the considered public will and determining appropriate penalties, together with any offender’s ability to rehabilitate) becomes increasingly negated. Add to that an increasing requirement, as a result of technological evidence, to prove innocence rather than guilt, and those problems become compounded (after all, the computer provides excellent evidence!). The potential for harm to professional or other reputation can appear merely as another symptom of that same difficulty, which most frequently degenerates into nothing more than power struggles rather than realistic searches for any common truths, thereby re-creating its own atmosphere. Certainly, where an egoistic personality exists within a rules-based worldview, such degenerate power struggles will add to the overall problem. (Trump does not fit in this sentence.) Certainly a situation of total political confusion, in which great uncertainty exists in the popular mind until more certain ground can be perceived, may be a natural safety valve in these types of circumstance.
    Whilst information, consideration and a broad comprehension may provide an answer for some, in the same way particular formulae may for others, perhaps the main question raised by the article (one which at one point Anders’ response leaned towards) is which types of focus for ethical/moral thought may be simply and effectively applied generally in any populace experiencing similarly changing circumstances.

  10. Nelly carlton clinic

    Well, it’s trying to simplify life, but AI still can’t be as intelligent as a human. What I’ve noticed is that for content writers and traders it’s so far doing a great job!

    1. Paul D. Van Pelt

      I understand your point of view. Everyone has interests, preferences and motives. That is a given aspect of everyday life. No one expects everyone else to be attuned with and sensitive to everything else. Whether or not advocacy groups seize upon this probably will not lead to serious legal challenges. The game is barely “afoot”, though. The notion of biases of any sort will drive some interest, preference and/or motive or other. I would stipulate that sexual harassment, in the eyes of the law, goes several different ways, i.e., it is not only about men harassing women. As to the part about content writers and traders, my response is: “it depends”. Content writers can include students. In that instance, the expressed concerns seem to have validity, from my limited understanding of what I have been reading.

  11. Paul D. Van Pelt

    All right. Let us descend into reality for less than five minutes. AI has NO problems. Yet. It is entirely possible that WE do. This is not metaphysics. The whole notion of AI presupposes our continuous control of it. Or, maybe for some, not. Okay. That was less than five minutes. Pretty good, no?
