When Cupid fires arrows double-blind: implicit informed agreement for online research?
A while ago Facebook got into the news for experimenting on its subscribers, leading to a fair bit of grumbling. Now the dating site OKCupid has proudly outed itself: We Experiment On Human Beings! Unethical or not?
They point out that “…if you use the Internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work.” Besides subtle A/B testing, many sites are experimenting wildly, trying out new business models or new ways of functioning. OKCupid reveals some cases where they ran experiments to check whether their methods actually worked, as well as accidental experiments arising from neat ideas that turned out to be not so useful but yielded interesting information (like ‘Love is Blind Day’, where profile photos were hidden and people could not judge based on appearance).
In the case of Facebook, the main concerns were that the subjects may not have given informed consent, and whether they had been (or could have been) harmed.
In the case of OKCupid the users may have had different expectations than on Facebook: Facebook is a social medium that is not sold as doing anything with or for you, but simply as an environment. OKCupid clearly aims at helping customers find love: there is a deliberate aim in a certain direction, even though much of the online activity looks like that of a social network. Sure, there are privacy policies, but how many people read or understand them? Most likely people assume sites behave in ways corresponding to their apparent uses (yes, a bad assumption). In the case of OKCupid users presumably assume magical algorithms are at work to find good matches. A bit of thought also suggests that these algorithms might need adjustment or experiments to be improved. So in a sense, users are likely to be aware that experiments are ongoing – even more so if they read, or are aware of, the site’s research blog.
The harm angle is more interesting. While Facebook slightly affected the emotions of people who might not have expected emotional manipulation, OKCupid is all about emotions and emotion-laden social interaction. People date because of the site. People have sex because of the site. People marry because of the site. Manipulations could potentially have far more far-reaching consequences on OKCupid than on Facebook. Sure, Facebook also enables dating and marriage, but that is not its focus. If users expect, or at least hope, that the dating site will affect their love life, they also seem to be implicitly assuming it could affect them significantly. To me, this suggests that again there is an implicit consent not just to being manipulated but even to taking the risk of heartbreak.
In practice, OKCupid doesn’t seem to do very dangerous things. It mainly manipulates who sees whom, how people rate each other, and sometimes publishes patterns in the data. While getting IRB approval in academia for having people publicly score others’ appearance and personality (which certainly might affect self-esteem) might face a few hurdles, it is still not too different from what already happens out in society.
The fact that something might be hard to do academically does not mean it is unacceptable in other domains: performance artists regularly risk life, limb and sanity for art in ways that would not fit any reasonable research ethics.
To sum up, the existing experimentation online certainly can cause ethical concerns. These mainly would show up if:
- There is a divergence between the perceived function/aim of the site and its actual activity. This means users’ assumptions mislead them when they give their implicit consent.
- The manipulations are not in line with the user function of the site. If OKCupid were manipulating people in order to affect racism (rather than investigate racial differences in dating) it would likely break the implicit agreement. (An interesting case is the urging to boycott a while ago; in this case it was not so much manipulation as an overt message).
- The manipulations deal with potentially risky domains, as compared to normal social activities.
- Scale might also be a factor: were Facebook or Google to influence huge numbers of people, we might demand stronger care in how the manipulations happen.
- Information gained from the manipulation is never made available to the people who participated or were affected. Sometimes we accept that companies we do business with gain an information advantage by holding our information, but information beyond what we implicitly choose to reveal gives them an advantage that we do not share in, yet may bear costs from.
We all manipulate each other, whether as individuals, companies or groups. Manipulation is not in itself unethical. Manipulation for gathering information is not unethical either, as long as it is done carefully, the information is not misused and stakeholders also benefit.