By Charles Foster
When you click ‘Like’ on Facebook, you’re giving away a lot more than you might think. Your ‘Likes’ can be assembled by an algorithm into a terrifyingly accurate portrait.
Here are the chances of an accurate prediction:
Single v in a relationship: 67%
Parents still together when you were 21: 60%
Cigarette smoking: 73%
Alcohol drinking: 70%
Drug-using: 65%
Caucasian v African American: 95%
Christianity v Islam: 82%
Democrat v Republican: 85%
Male homosexuality: 88%
Female homosexuality: 75%
Gender: 93%
Read the (very accessible) paper in full for details of the methodology. For present purposes two further and related observations will do. First: most of the prediction happened by the connection of pieces of information which, by themselves, would not have been particularly enlightening. And, second, ‘few users were associated with Likes explicitly revealing their attributes.’ Thus, for instance, fewer than 5% of gay users were connected with explicitly gay groups. This second observation might be particularly ethically significant: it might mean that, despite our willingness to disclose a lot to the world, there are some things about ourselves that we would rather not have known.
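To make that first observation concrete, here is a minimal, purely illustrative sketch of how hundreds of ‘Likes’, each nearly useless on its own, can combine into a strong prediction of a hidden attribute. This is not the authors’ code: the data are synthetic, and the library (scikit-learn) and every parameter choice are my assumptions, though dimensionality reduction followed by logistic regression is the general kind of pipeline the paper describes.

```python
# Illustrative sketch only (not the paper's code): many individually weak
# "Likes" combined by SVD + logistic regression. All sizes, base rates and
# parameters below are invented assumptions for the demonstration.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_likes = 5000, 300

trait = rng.integers(0, 2, n_users)        # hidden binary attribute
base = rng.uniform(0.02, 0.10, n_likes)    # base rate of each Like
lift = rng.uniform(0.00, 0.04, n_likes)    # tiny per-Like association with the trait
p = base + np.outer(trait, lift)           # users with the trait Like each page slightly more
likes = (rng.random((n_users, n_likes)) < p).astype(float)

X_tr, X_te, y_tr, y_te = train_test_split(likes, trait, random_state=0)

# Each Like on its own is nearly uninformative...
single_aucs = [roc_auc_score(y_te, X_te[:, j]) for j in range(n_likes)]
print(f"best single-Like AUC: {max(single_aucs):.2f}")

# ...but reduced to 100 components and combined, they predict the trait well.
svd = TruncatedSVD(n_components=100, random_state=0)
model = LogisticRegression(max_iter=1000)
model.fit(svd.fit_transform(X_tr), y_tr)
auc = roc_auc_score(y_te, model.predict_proba(svd.transform(X_te))[:, 1])
print(f"combined-model AUC: {auc:.2f}")
```

On this synthetic data the best single Like barely beats a coin flip, while the combined model scores far above it; that gap is exactly what makes individually bland ‘Likes’ collectively revealing.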
The sort of conclusions generated by this algorithm are of course likely to be of great interest to many, including the security services (they can now target their drug searches more effectively), insurance companies (have you lied on your application form about your smoking?), homophobic bigots (‘he’s been lying to us all along: let’s get him’), politicians (‘We know there are Republicans in that house: make sure they get dragged out to vote’), and retailers. The most benign-sounding of these, retailers, aren’t necessarily benign at all. The authors of the paper discuss the use of shopping records by a US retail network to diagnose the pregnancies of its female customers. The customers thought to be pregnant were then sent targeted offers. Those offers might be welcome, but they might also do great damage. They might, say the authors, reveal (or incorrectly suggest) the pregnancy of an unmarried mother in a culture where such a pregnancy is unacceptable. Or, I suggest, cause great pain to a woman who has just lost a much-wanted child.
Is it unethical to use such an algorithm? It might be said (and Hannah Maslen indeed argues powerfully for this position here) that if you put lots of little bits of information about yourself in the public domain, you can hardly complain if they are merely stuck together to form a more complete picture.
Surely it turns on consent. If it is universally understood that your ‘Likes’ can and will be used this way, there can indeed be no cause for complaint. But most users will not understand that.
If I invite you into my hall and kitchen, I’m not inviting you to look into my bedroom. If, unknown to me, you have x-ray glasses which you use to look up from the hall into my bedroom, I’ve been violated.
The low incidence of gay people ‘liking’ explicitly gay groups surely gives the lie to the suggestion that, by ‘liking’ anything, we’re impliedly giving the algorithm permission to rummage, unrestricted, through our lives.
AFAICR I haven’t “liked” any explicitly LGBT groups, but I’m entirely out; there may be lots of others like me.
I am intrigued, Charles, by your last argument: «The low incidence of gay people ‘liking’ explicitly gay groups surely gives the lie to the suggestion that, by ‘liking’ anything, we’re implicitly giving the algorithm permission to rummage, unrestricted, through our lives.»
If I understand this correctly, the argument goes:
The majority of gays do not publicly state that they like explicitly gay groups; therefore it is clear that they want to keep their homosexuality private, and therefore they also deny anyone the right to infer their sexual orientation from other potential clues revealed by their likes.
Your argument, it seems to me, only works if you assume:
1. that most gays must «really» like explicitly gay groups,
2. that if they don’t do so publicly it must be because they want to hide their gayness, by not giving such an obvious clue to their sexual orientation, and
3. that it follows that they refuse the possibility for others to deduce anything about their personalities from what they do place on public view.
In which case, a couple of questions:
1. Why should gays like explicitly gay groups any more than any other group or association? (It is rather like saying that gay singers prefer to join a gay choir rather than one that makes no claims about the gayness or straightness or kinkiness or whatever of its members. Some clearly do, but I doubt they form a majority.)
2. Why should this lack of public demonstration of «liking» for gay groups imply anything at all?
3. Is all inference drawn from specific behaviours, statements and preferences of individuals immoral?
We all give obvious and not-so-obvious clues about ourselves all the time, consciously and unconsciously. Indeed, it can be argued that unless we did so, human interaction would be unbearably cumbersome. (Imagine that every time you met another person you had to explicitly define your roles and relations one to the other…)
Facebook clearly makes the world more complicated by its colossal span, and by the fact that this facilitates the use of algorithms to automate the inferences that we could all make from scraps of disparate and banal information.
If we care about the presentation of ourselves in everyday life we should take care about what we reveal. This seems to be a very reasonable precaution to take, but I agree with Hannah – I don’t see that this is a moral issue. (And I think she gives some strong arguments for her point of view.)
If you were consistent, I suspect you should also criticise the authors of the paper for revealing that such algorithms give pretty accurate results for a certain number of traits, thus encouraging their development and use…
PS: As you know, the case of the retailer and the probably pregnant person has nothing whatsoever to do with Facebook or the internet.
Anthony. Many thanks. Your summary of the argument is spot on.
As to your questions:
1. I and the authors of the paper make an assumption that gay people feel an intrinsic solidarity with other gay people. I don’t know whether there is an evidence base for that assumption, but it doesn’t seem unreasonable. And it seems even less unreasonable to assume that gay people will feel a greater degree of solidarity with gay groups than will heterosexual people.
2. If the assumption in 1 is right, then the under-representation of ‘likes’ by gay people of gay groups must mean something. And it is hard to know what else it could mean.
3. No. Nor have I suggested it.
Charles, it is your argument that is spot-on, and frankly Anthony appears to be making excuses for something egregious.
This is a critical point that should be brought to bear upon all current debates over privacy:
There is a difference between “being seen,” “being watched,” “being stared at,” and “being stalked.”
The pre-cyber-age line that “one has no right to an expectation of privacy when in public” fails utterly to make those critical distinctions.
What Facebook is doing, and Google as well, is indistinguishable from stalking except in that the ultimate outcomes of both do not yet include overt violent crime against the victims. All of the other elements of stalking are present: obsessively following individuals to ascertain and make inferences about details of their lives that they do not consent to have known or predicted.
Further, the reply article by Hannah Maslen is also so much horse manure, precisely because it fails to make that distinction, and instead treats “prediction” as something different to “knowledge.” The blunt fact is that in the circles where these algorithms are designed and applied, the word “prediction” usually occurs in the phrase “prediction and control.” The goal is not only to predict you, but to control you, as far as possible without committing an overt crime along the way. Think of date rape by the use of hypnosis rather than by dosing someone with drugs: much easier to claim it was mere seduction.
Facebook and Google are seeking to create a panoptic society that resembles nothing so much as the former East Germany, with the same ultimate effect: a pervasive chill and a mind-numbing conformity. And anyone who believes that the only uses of these vast dossiers will be to send you ads for things you secretly want but never told anyone has a hole in the head where their critical thinking belongs. Those databases and dossiers are tasty morsels for any tyrant who ever comes along: and we would be the worst sort of fools to imagine there will never be another.
You’re right on your point 3, Charles. My apologies: you didn’t suggest that ALL inference to traits from revealed behaviours was unethical. But you argue that this particular type of inference from Facebook by computer algorithm is unethical, and I’m not sure how or where you draw the line. You argue that certain people who are not aware that this can be done would regret their choices, or possibly be harmed, and that this makes it unethical. But I wonder whether this type of argument from ignorance or naïvety stands up.
If, in a form of «Desert Island Discs», I publicly choose a whole bunch of things that I like, can I justify preventing others from drawing their own conclusions about me?
Why is Facebook different? Or is it the fact that computers are used? Or statistics? Or psychological biometrics? Or what?
Anthony: many thanks. I draw the line by reference to consent. If you go on Desert Island Discs, you know that your listeners will draw inferences, and have effectively consented to them doing so. You cannot complain that their mental algorithms will put together all the facts you’ve mentioned and build up a picture of you. As I said in the post, if and when the use of the FB algorithm becomes universally known, the same will apply to FB likes. But until then, it will be illegitimate for the algorithm to piece together bits of information that you did not know could be connected.
Hello Charles,
I thought you might like to look at this piece, written in 1962. We can’t say we weren’t warned! (Even though he doesn’t mention Facebook directly.)
https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/csi-studies/studies/vol-56-no-4/pdfs/Clotworthy-Imaginative-Use-of-Computers.pdf
Anthony: very many thanks. A really fascinating read.