
What has Facebook (with some psychologists) done?

In my academic and musty corner of the universe, there has been a lot of talk in the past few days about this publication in the prestigious Proceedings of the National Academy of Sciences. Researchers tweaked a Facebook algorithm so that users would see a higher proportion of posts with negative or positive emotional content in their feed. They wanted to know whether seeing a different proportion would influence the emotional content of a user's own posts in a positive or negative direction. The news: it did (a little bit).

People are less interested in that, however, and more interested in whether the researchers acted unethically. The BBC has a short round-up of some tweets here, and among other things the Guardian quotes Labour MP Jim Sheridan calling for an investigation here. Slate tagged its story on the issue with the headline ‘Facebook’s Unethical Experiment’ – a headline that shifts blame away from researchers and entirely to Facebook. There are many more news stories on this out by now: you get the picture.

One set of issues I won't discuss much is the one a lot of my psychologist friends are talking about. How was this study given Institutional Review Board approval (IRB approval is usually required by any university before carrying out any study on human or animal subjects)? Shouldn't any IRB worth its salt have flagged the ethical issues in this study and demanded more oversight? This issue is discussed at length here – it appears that IRB approval might not even have been required for this study, and it remains unclear at present whether the study had, or attempted to get, IRB approval. That issue aside, one thing a Facebook friend of mine notes – and I agree – is that many are overlooking an important point: since there was no impartial observation of how this study was carried out, judgments about who was harmed, and how much, were left to the researchers. That seems problematic.

Beyond issues to do with university-based ethics approval – approval that Facebook in any case does not need in order to carry out any kind of experiment with our feeds – the story raises a number of other ethical issues.

Have we given informed consent to this kind of thing merely by signing up to Facebook? There is some vague language Facebook throws at you about using your information for data analysis, but that hardly qualifies. I'll be honest, though – I assume Facebook is doing much more malicious things with our information (the kinds of things it would have no motivation to publish). I still use Facebook, though, primarily because it is the only viable way I have to keep in touch with a number of old friends. Does my paranoid assumption mean I have given tacit consent to this kind of thing?

Were we harmed by this experiment? Were we manipulated by it? The answer to both questions is yes, but here I think degrees matter. How badly were we harmed? Badly enough that we should be upset? How badly were we manipulated? Badly enough that we should be upset? What is the threshold of harm that Facebook, or any research of this kind, should not cross? I have no idea. Honestly, I'm not bothered by this particular study. But that is just me: it is clear that many are very bothered by it.

With others, I am bothered by all the conceivable scenarios in which Facebook uses knowledge about how to subtly manipulate users for its own purposes. As I have already said, Facebook does not need IRB approval, nor does it need to publish what it learns by manipulating Facebook feeds and analysing the data. It might be that people are not very sensitive to whatever manipulations Facebook can perform. The effect sizes in the study at issue were very small, after all. But it might be that people – at least some people – are very responsive to certain kinds of manipulations. That puts us in a difficult situation. It would be good to know on how wide a scale social media can be used to influence behaviour, good to know how transformative that influence can be, and why. But to figure that out a lot more studies have to be done, and the results have to be made public knowledge. Who is going to give informed consent to that? And who is going to make Facebook or any other social media site play by those kinds of rules?


4 Comments on this post

  1. Informed consent is one side of the problem. The other is voluntariness: most research ethics assumes only volunteers are to be used, and that requires telling them they are in the experiment (at least afterwards).

    I am wrestling a bit with this right now, since I am involved in setting up an experiment that involves employees at a company: how voluntary is their participation when they know the boss wants the experiment done? I am also working on a talk about enhancement ethics for a military audience. Many interesting papers have come out of that domain, but volunteering among soldiers is obviously even more problematic than among employees. In the wired army of the present or near future, Facebook-style big randomized controlled experiments could be done in a realistic setting – potentially very important information might be learned, but I have a suspicion that we are going to be way outside anything like academic notions of informed consent and voluntariness.

    I think getting a grip on harm is important, but big experiments will have a big harm distribution. If I mostly get sad updates in social media for a week, I will be slightly saddened. Somebody else might commit suicide. Most people likely have a modest response, but given 689,003 people and a US suicide rate of 12/100,000 per year, we should expect about 1.6 suicides in the test set over one week. I wonder what the actual suicide number was? If it was 2 cases, that is of course not statistical proof of anything, but the family of one of the victims might still have reason to blame Facebook and the researchers for experimenting on a vulnerable person.
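
    A quick back-of-the-envelope check of that expectation, sketched below in Python. The figures (689,003 participants; an annual US suicide rate of about 12 per 100,000) are the ones given above; pro-rating the annual rate evenly over 52 weeks is an assumption of the sketch, not a claim about how suicide risk is actually distributed.

    ```python
    # Expected number of suicides in the study sample over a one-week
    # exposure window, assuming the annual base rate applies uniformly
    # across the year and across the sample.
    sample_size = 689_003        # participants in the Facebook study
    annual_rate = 12 / 100_000   # approximate US suicide rate per person per year
    weeks_per_year = 52

    expected_yearly = sample_size * annual_rate         # ~82.7 per year
    expected_weekly = expected_yearly / weeks_per_year  # ~1.6 per week

    print(f"Expected over one year: {expected_yearly:.1f}")
    print(f"Expected over one week: {expected_weekly:.1f}")
    ```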

  2. Thanks Anders. I agree these are difficult issues. I don’t have a good way to think about massive distribution of slight harms, and would welcome any pointers towards work on this. I also am unsure that we have a good grip on how informed consent is supposed to work in many of these contexts. If we construe both ‘informed’ and ‘consent’ in a way that requires explicit communication of information about what is being done in a given experiment — as is required by IRBs for most experiments — I doubt Facebook has it.

  3. The “consent” in the Facebook TOS doesn’t cover what the experiment did. The user agreement says Facebook can do research on the data you send. The experiment was intentionally distorting the data you RECEIVE, in an attempt to alter your mental state. Someone should be facing charges, and lawsuits.

  4. I think the reactions to this are a little overblown, assuming I understand rightly that people saw their friends' genuine posts, but some were removed if they were too "happy" for one group, or too "sad" for another.

    Research ethics was invented in response to some pretty major ethical failings, and there are now, rightly, procedures to prevent them from ever happening again. But those were things like testing drugs without consent, usually on particularly vulnerable groups.

    Whilst there are vulnerable people using Facebook, the experiment simply created artificially a situation which may easily arise naturally. For example, around exam time a teenager's Facebook might be full of downbeat posts, or around the summer holidays particularly upbeat ones. Or a particular group of friends might all be affected by the same thing (let's say One Direction broke up, for example). Nor do we usually have any control over, or particular expectation of, what a friend posts or doesn't post – we accept whatever they offer, though we can block them if it gets annoying or upsetting – as presumably the experiment participants could. So we have already accepted the risk that our friends' emotions will be contagious, and we have some tools at our command to control them if they get out of hand.

    Sure, there are research ethics processes to be followed, and this may well turn out to be a failure of process. It is important to follow up on all failures of process, because only through those processes will we get protection against experiments which are unethical. But that is different from saying this particular experiment is unethical.

    Indeed, if this experiment is unethical, then given the results, does that mean it is unethical to post about a bad day? Or to post too often with a negative tone?
