
Truthful Misinformation

written by Neil Levy and Keith Raymond Harris

There’s a lot of debate over the harms of misinformation today: whether it is more prevalent now than in the past, how often it misleads people, whether people act on it, and whether the costs of content moderation or algorithmic deprioritisation might be worse than the disease they aim to cure. This debate has focused, unsurprisingly, on false claims like “the 2020 election was stolen” or “Trump is a Russian agent” (misinformation, here, is more or less synonymous with fake news). In a new paper, we argue that misinformation can be truthful.

Misinformation is truthful if the events it reports or depicts really happened, but consuming it is likely to result in false beliefs.

One obvious way in which misinformation might (arguably) be truthful is that it might be taken out of context. Recently, for example, Donald Trump was widely (and correctly) reported as saying that the US was headed for a “bloodbath” if he wasn’t elected. The full context makes clear that he was predicting a metaphorical bloodbath: a bloodbath for the US auto industry. But this context often wasn’t provided, or was obscured (The Guardian provided the context in the 10th paragraph of its story; the quotation appeared, misleadingly, in both the headline and the first paragraph). This, however, isn’t the sort of truthful misinformation we have in mind.

We don’t think it’s especially interesting to call attention to this phenomenon: taking quotes out of context is completely familiar, and its effects are well known. Moreover, we think there’s a case for denying that this sort of misinformation is really truthful. Misinformation like this does not accurately depict what happened; context changes meaning. There’s an obvious difference between reporting “he said ‘I don’t believe Trump won the election fairly’” and reporting “he said ‘Trump won the election fairly’”, even though both sentences accurately report the words said. Taking things out of context can be enough to make the information false.

Donald Trump: Threatening a bloodbath?

We have in mind a different sort of phenomenon, where providing more context won’t change the truth of what is depicted. Think of the videos that often go viral on X and other social media sites, portraying people saying or doing things that the audience finds outrageous or contemptible. Right-wing accounts repost TikTok videos of people who identify as left wing saying something anti-Semitic, for example, and left-wing accounts repost cell phone footage of open racism toward Black people. While sometimes these videos are cut in ways that might plausibly count as untruthful, they often accurately depict what was done or said. Nevertheless, we think that this sort of thing can constitute misinformation.

It is misinformation, we argue, because it predictably leads to false beliefs about a group. People who identify with the left and are overtly and openly anti-Semitic are a minority of left-wing people. Similarly, people who identify with the right and are overtly and openly racist are a minority of right-wing people. But videos that depict these minorities are far more likely to circulate widely than videos of people we might disagree with, but who are not obviously outrageous or contemptible. They’re attention-grabbing in a way that more ordinary behavior isn’t.

The result, we suggest, is that though the content is accurate, its prevalence leads to false beliefs about other groups. We come to have a distorted sense of just how common these attitudes are among our opponents. As we write, Libs of TikTok features several posts about immigrants who have committed violent crimes. We have no reason to think these posts are inaccurate, but their relative prevalence in the feed of someone who follows accounts like this is likely to be hugely disproportionate to the actual numbers (legal and illegal migrants to the United States appear to commit violent crimes at lower rates than the rest of the population).

It is debatable whether politically engaged people now hold views that are actually further apart than they used to be, but there’s good evidence that each side has more negative attitudes toward the other than previously. Truthful misinformation may contribute to this affective polarization, by exposing us much more to the worst people on each side and much less to the average partisan.

How should we address the harms of truthful misinformation? We’re sceptical that censorship or even content moderation would be productive in these sorts of cases. We suggest it might be better to change social norms. We can each avoid reposting outrageous content from the other side (except when it is really relevant: police officers or government officials might appropriately be called out for such behavior) and discourage others on our side from engaging in this sort of activity. Truthful misinformation is entertaining, so it’s an uphill battle. But it’s worth the fight.


2 Comments on this post

  1. Reading about the US heading for a “bloodbath” if some presidential candidate is not elected reminds me of the famous “Daisy Girl” video from the 1960s.
    This “Daisy Girl” clip was broadcast only once, as part of Lyndon Johnson’s presidential campaign in 1964.
    After the clip aired, people were really terrified, and there were a lot of phone calls from scared parents asking what was happening and what they should do.
    In this short black-and-white video, a little girl counts down and, at the end, a nuclear bomb explodes in a huge mushroom cloud. The clip’s message was that if Johnson were not elected and his Republican opponent Barry Goldwater won, there would surely be a nuclear war.
    So we can see that today’s social networks are just an “amplifier” of the negative emotions and misinformation that are an inherent part of every political debate. A lot of books have been written about “going dirty”, i.e. negative campaigning.
    Surely, fear is the most important human emotion. All the other emotions, like love, hatred, compassion etc., are only derivative.
    Therefore fear will always be misused both by politicians and the media (e.g. covid). And, as the article concludes, the only defense is to explain, not to ban. To ban means that gradually even the truth would be forbidden …

  2. Contextuality will always exist, but relativism is not widely accepted. Could that be an outcome of many routine human decision processes, in which relativity is denied as a means of reaching a single actionable decision? Has politics not always played with those different relationships, relating them to the context of particular worldviews and outcomes while denying or totally ignoring any relativistic aspects which might not fully favour those views, might be of use later, or might merely expose a real blindness?
    Even within academic debates, debating mechanisms exist which follow similar routes; e.g. reduc tio ad absurd um arguments often negate debates about contextually relevant material, sometimes leading to the disabling of useful and germane aspects to the benefit of particular interests. It appears to me this issue is allied to a lack of recognition of the difference between administering a dialogue and controlling the narrative.
    This blog could be accused of a type of censorship mentioned in this article when looking back at the outcome of the article “When Eating Meat is OK: A Defence of Benign Carnivorism”, because a response I posted to that article has become stuck in the system, probably because of a wrong answer to the initial authentication challenge when posting (due to a misreading). I have interpreted the non-appearance of a second, correctly authenticated response more as a glitch in the new processes/system than as censorship or keyword filtering.
    Some consideration of that type of technical difficulty would need to be included within the communicative outcomes from this article’s conclusions if unintended issues as damaging as those being avoided are themselves to be avoided; and even then the difference between informing the dialogue and controlling the narrative needs to be very clearly understood by everybody to assure the ongoing academic integrity of the blog.

    N.B. The additional spaces were to check whether there is keyword filtering in the new site structure.
