written by Neil Levy
Here’s the common view of social media and its epistemic effects. Social media leads to people sequestering themselves in echo chambers, and echo chambers cause extreme and/or unjustified beliefs. When we don’t exchange opinions with a variety of people, we don’t have access to the full range of evidence and argument. Instead, because echo chambers form around already likeminded people, they lead to the entrenchment of initial views, no matter how good or bad they might have been to begin with.
I’ve argued that many of the criticisms of echo chambers are off base. They don’t undermine the rationality of the views we form. Nor is social media responsible for their formation, I’ve suggested – dispositions to reject some views and accept others in virtue of their source are built into being epistemically social animals. Still, there’s obviously something to the idea that being exposed to a certain range of opinions has effects on our beliefs – the range of opinions to which we’re exposed makes a difference to how well justified our beliefs are.
In this post, I want to consider the opposite phenomenon: how exposure to a diversity of opinions might undermine knowledge.
With echo chambers, my belief about some politically charged topic might be false or badly justified because I haven’t considered alternatives – either because I don’t have access to other views or because I don’t take them seriously. This kind of dynamic might characterise Facebook, where we may choose to interact only with our ‘friends.’ But Twitter has a more open structure: anyone can comment on your tweets. And that can undermine knowledge at least as effectively.
An effective way to see this in action is to click on the Twitter trending topics when they’re political. Take any topic on which you have an opinion beforehand – the efficacy of vaccines, say. Clicking on a hashtag like #vaccinemandates leads to a series of tweets, some of which express support for a position contrary to one’s own. Some of them cite (apparent) evidence. To take a real example, I recently clicked on a trending hashtag about vaccines and was presented with dozens of tweets reporting (alleged) cases of vaccine injuries after one of the Covid-19 vaccines. Of course, adverse events caused by vaccines really occur, and some of them are serious. But the tweets were alleging (and linking to footage of) multiple cases of unreported serious adverse events. They were alleging cover-ups and systematic underreporting. Now, I suspect that the evidence is in fact compatible with my pre-existing opinion that the vaccines are safe and well tolerated. Nevertheless, reading them threatens my knowledge.
I suspect that the cases are a mishmash of genuine cases of serious adverse events and spurious cases, which mashed together create an appearance of an epidemic of serious side effects. They may do this by multiple repetitions of genuine cases, presented in such a way that it’s not obvious that they’re the same few cases again and again, as well as by misattributing genuine injuries to the vaccine, whether by creating a causal link when there is none, or simply fabricating a back story. That’s common: it’s a staple of fake news to present a real photo or video and claim it represents the effects of Biden’s policies, or Muslims rioting in Europe, or whatever, when the photo or video actually depicts an unrelated event. The appearance of an epidemic could even be created by presenting multiple real cases of adverse events: they may be very rare, but with so many millions vaccinated, the absolute number is large enough to fill the screen multiple times over.
So when I click on the hashtag, I am presented with a great deal of evidence that is apparently inconsistent with my belief. It’s hard to hang on to the idea that the vaccines are safe in the face of all that evidence. I could start checking the tweets. But that’s hard to do. Probably, I could debunk some of them quite quickly. I might find a fact checking site has already discussed some individual cases, and found they’re unrelated or fabrications. But many would be hard to verify or debunk, either because not enough information is provided to check or because the only sources repeat the Covid-19 vaccine claim but are not authoritative enough for me to trust (but also not obviously unreliable either). It would also take me more time than I can feasibly spend on the project to fact check even a representative sample of the tweets.
The result is that I now possess a large number of what epistemologists call defeaters for my belief. A defeater for a belief is (roughly) a prima facie reason to think that the belief is false. Fact checking the evidence might be thought of as a search for defeaters for the defeaters. Because I’m not able to fact check enough of them, I have undefeated defeaters for my beliefs. When we have undefeated defeaters for our beliefs, the degree to which they’re justified falls. If these beliefs constituted knowledge previously, they may no longer do so.
Of course I could have avoided encountering these tweets. I clicked on the hashtag myself. But Twitter’s open structure means that a less dramatic version of this sort of thing happens frequently. I may accumulate defeaters for my beliefs just in the course of using it even without seeking diverse sources.
Echo chambers may threaten knowledge by limiting the diversity of opinions we’re exposed to. Open-textured networks may threaten knowledge in the opposite way: by exposing us to a diversity of opinions and evidence.
So how do we preserve knowledge? Is there some middle path: do we need to be exposed to just enough variety in opinions (perhaps so we can fact check them for ourselves)? I won’t try to answer this question. I limit myself just to suggesting that echo chambers may not be the problem they’re made out to be: knowledge can be threatened because we’re not in such a chamber. Nor, I think, is the problem social media: both phenomena can and do arise in other ways. Knowledge can be fragile and it doesn’t take much to threaten it.
Zygmunt Bauman in “Modernity and the Holocaust” and Pierre Bourdieu in “On Television” considered the question of how technological progress influences politics. Some political scientists (e.g. J. J. Linz, H. Arendt) have also concluded that totalitarian or authoritarian regimes are inherently connected with modernity and modern inventions (radio broadcasting, for instance, helped Nazism seize power in Germany).
Ulrich Beck, for his part, speaks of a reflexive (risk) society, one that reacts to its own characteristics.
That is why social media no doubt influence politics in a massive way.
Therefore our reaction to the recent pandemic tells us nothing about the virus. But it tells a lot about our society and its risk factors, social media included.