
Spin city: why improving collective epistemology matters

The gene for internet addiction has been found! Well, actually, it turns out that 27% of internet addicts have the genetic variant, compared to 17% of non-addicts. The ENCODE project has overturned the theory of ‘junk DNA’! Well, actually, we already knew that much of that DNA was doing things long before, and the definition of ‘function’ used is iffy. Alzheimer’s disease is a new ‘type 3 diabetes’! Except that no diabetes researchers believe it. Sensationalist reporting of science is everywhere, distorting public understanding of what science has discovered and of its relative importance. If the media are supposed to give a full picture of the situation, they seem to be failing.
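For scale, assuming those percentages are accurate, the implied odds ratio is (0.27/0.73)/(0.17/0.83) ≈ 1.8: a weak association, and nearly three quarters of the ‘addicts’ lack the variant anyway.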

But before we start blaming science journalists, maybe we should take a hard look at the scientists. A new study shows that 47% of press releases about controlled trials contained spin, emphasizing the beneficial effects of the experimental treatment. This spin carried over into subsequent news stories, which often copied it from the original. Maybe we could try blaming university press officers instead, but the study found spin in 41% of the paper abstracts too, typically overestimating the benefit of the intervention or downplaying its risks. The only way of actually finding out the real story is to read the paper itself, something that requires a bit of skill, and quite often paying for access.

Who to blame, and what to do about it?

Every reporting step between the original finding and the eventual recipient adds to the risk of bias or selection. It takes a somewhat exciting finding (or an eager researcher) to make it into a press release. Journalists select the press releases that make good stories. Readers pick the news articles or blog posts that are interesting, and so on. The result is that a chain of many intermediary steps easily biases reporting, and hence our understanding of what is going on.

That inevitable biasing effect is worsened if the original information is spun to be easily propagated: since scientists know roughly what people are interested in (new beneficial breakthroughs, that something commonly but not strongly believed is false, that something commonly and strongly believed is true, new risks, etc.), they will tilt their framing in those directions, creating a nonrandom bias that has nothing to do with the actual results.
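To see how quickly selection and spin compound along such a chain, here is a minimal sketch, assuming an entirely made-up model: many studies whose true effects are centred on zero, each measured with noise, where each stage (abstract, press release, news story) passes on only the most exciting-looking fraction and adds a little exaggeration on top. The keep-fraction and spin size are arbitrary assumptions, not estimates from the studies discussed above.

    import random

    random.seed(1)

    N_STUDIES = 100_000
    NOISE_SD = 1.0        # measurement noise per study (assumed)
    KEEP_FRACTION = 0.3   # share each stage passes on (assumed)
    SPIN = 0.2            # exaggeration added at each step (assumed)
    STAGES = ["abstract", "press release", "news story"]

    # true effects centred on zero: most findings are unremarkable
    pool = []
    for _ in range(N_STUDIES):
        true_effect = random.gauss(0.0, 1.0)
        reported = true_effect + random.gauss(0.0, NOISE_SD)
        pool.append((true_effect, reported))

    for stage in STAGES:
        # each intermediary keeps only what currently looks most exciting...
        pool.sort(key=lambda s: s[1], reverse=True)
        pool = pool[: int(len(pool) * KEEP_FRACTION)]
        # ...and nudges the claim upwards before passing it on
        pool = [(t, r + SPIN) for t, r in pool]
        mean_true = sum(t for t, _ in pool) / len(pool)
        mean_reported = sum(r for _, r in pool) / len(pool)
        print(f"after {stage}: mean true effect {mean_true:+.2f}, "
              f"mean reported {mean_reported:+.2f}")

Even with perfectly honest intermediaries (SPIN = 0), selection on noise alone pushes the reported mean upwards at every step; spin just widens the gap between what was found and what readers eventually see.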

As Christie Wilcox notes, “Researchers, press officers and journalists all need to take responsibility for accurate and informative science communication.” But the problem is that they are just the paid participants in that part of the epistemic process. In principle, the incentives could be realigned: funding bodies could actually check researchers’ output, giving bonuses or penalties depending on its neutrality; university funding could be tied to third-party assessments of the accuracy of press coverage (imagine the freedom of speech or academic freedom issues of that!); and the media could somehow be rewarded more for good journalism than for exciting content.

Bloggers are an interesting group in this process. While many blogs and other online media just act as propagators of stories, biasing them step by step, some do act as quality feedback. Criticisms of the hype around the ENCODE study were first voiced online by science bloggers, and in many other instances the blogosphere has proven quite good at noticing problems in claims. The trick, of course, is to listen to the right sources (or to know when they are right).

That cuts to the core of the problem. We need to develop better systems of trust and reputation for our information; otherwise the current rapid expansion of the media world will not give us better information but rather the reverse: an amplification of spin, hype and bias. If people get the wrong idea of what has been discovered, part of the value of projects such as ENCODE is lost. If people find a doubtful diagnosis like internet addiction more convincing because genetics is involved, they will clamour for medicalizing and treating it. The good news is that there are many new possibilities for building trust in a distributed way: social media, online reputation management, paper or blog review boards giving stamps of approval, a “duty to respond” system among scientists, … who knows what else is being invented. But at the same time there is an enormous institutional lag: few academic departments or funding bodies consider good blogging a factor, and the incentives for getting involved in anything beyond current peer review are slight.

Methods of improving collective epistemology are important for the decision-making (and hence the practical ethics) of society, and ought not just to be studied but actively incentivized. Indeed, this may be a far more pressing goal than many currently fashionable research topics, simply because it cuts across all research: it multiplies the value of every finding and makes it easier to focus on what really matters.



2 Comments on this post

  1. I completely agree with your final point: the first philosophy class I taught was about testimony, because I thought it a necessary preliminary to any other class, simply to let students weigh their sources (internet, newspapers, books, even teachers…) with more awareness. I also agree on the need for teamwork to avoid the cases you mention: peer review is evidently not enough, and one would hope for more interaction both while scientific articles are being written and while their results are being popularized.

  2. Having more interaction during the development of (at least) the scientific reporting might be useful: the habit of circulating preprints online seems to have stimulated many areas, and many more eyes tend to scrutinize grand claims these days. We still need to figure out how to do this better. For example, ideally we would somehow measure the number of readers who, after different levels of reading, dismiss a preprint as nonsense and simply move on: there is not just publication bias, but also commenting and reviewing bias.
