
1 in 4 women: How the latest sexual assault statistics were turned into clickbait by the New York Times

by Brian D. Earp (@briandavidearp)

* Note: this article was originally published at the Huffington Post.

Introduction

As someone who has worked on college campuses to educate men and women about sexual assault and consent, I have seen the barriers to raising awareness and changing attitudes. Chief among them, in my experience, is a sense of skepticism, especially among college-aged men, that sexual assault is even that dire a problem to begin with.

“1 in 4? 1 in 5? Come on, it can’t be that high. That’s just feminist propaganda!”

A lot of the statistics that get thrown around in this area (they seem to think) have more to do with politics and ideology than with careful, dispassionate science. So they often wave away the issue of sexual assault, and won't engage on issues like affirmative consent.

In my view, these are the men we really need to reach.

A new statistic

So enter the headline from last week’s New York Times coverage of the latest college campus sexual assault survey:

“1 in 4 Women Experience Sex Assault on Campus.”

But that’s not what the survey showed. And you don’t have to read all 288 pages of the published report to figure this out (although I did that today just to be sure). The executive summary is all you need.


Psychology is not in crisis? Depends on what you mean by “crisis”

by Brian D. Earp (@briandavidearp)

* Note: this article was originally published at the Huffington Post.

Introduction

In the New York Times yesterday, psychologist Lisa Feldman Barrett argues that “Psychology is Not in Crisis.” She is responding to the results of a large-scale initiative called the Reproducibility Project, published in Science magazine, which appeared to show that the findings from over 60 percent of a sample of 100 psychology studies did not hold up when independent labs attempted to replicate them.

She argues that “the failure to replicate is not a cause for alarm; in fact, it is a normal part of how science works.” To illustrate this point, she gives us the following scenario:

Suppose you have two well-designed, carefully run studies, A and B, that investigate the same phenomenon. They perform what appear to be identical experiments, and yet they reach opposite conclusions. Study A produces the predicted phenomenon, whereas Study B does not. We have a failure to replicate.

Does this mean that the phenomenon in question is necessarily illusory? Absolutely not. If the studies were well designed and executed, it is more likely that the phenomenon from Study A is true only under certain conditions. The scientist’s job now is to figure out what those conditions are, in order to form new and better hypotheses to test.

She’s making a pretty big assumption here, which is that the studies we’re interested in are “well-designed” and “carefully run.” But a major reason for the so-called “crisis” in psychology (and I’ll come back to the question of just what kind of crisis we’re really talking about; see my title) is the fact that a very large number of not-well-designed and not-carefully-run studies have been making it through peer review for decades.

Small sample sizes, sketchy statistical procedures, incomplete reporting of experiments, and so on, have been pretty convincingly shown to be widespread in the field of psychology (and in other fields as well), leading to the publication of a resource-wastingly large percentage of “false positives” (read: statistical noise that happens to look like a real result) in the literature.
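To see why underpowered studies of null effects inevitably seed the literature with false positives, here is a minimal simulation (my own illustrative sketch, not from the article or the Reproducibility Project): thousands of small two-group "studies" where the true effect is exactly zero, each tested with a standard two-sample t-test at the conventional alpha of .05.

```python
import random
import math

random.seed(1)

def t_stat(a, b):
    """Two-sample t statistic with pooled variance."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / (sp * math.sqrt(1 / na + 1 / nb))

N_STUDIES, N_PER_GROUP = 5000, 10
T_CRIT = 2.101  # two-tailed critical value, alpha = .05, df = 18

false_positives = 0
for _ in range(N_STUDIES):
    # Both groups drawn from the SAME distribution: there is no real effect.
    control = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]
    treatment = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]
    if abs(t_stat(control, treatment)) > T_CRIT:
        false_positives += 1

rate = false_positives / N_STUDIES
print(f"'Significant' results despite zero true effect: {rate:.1%}")
```

By construction, about 5 percent of these null studies come out "significant" anyway. If journals then publish mostly the significant results while the null results sit in file drawers, those chance findings make up a disproportionate share of the published literature, which is one reason independent replication attempts fail so often.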

