Facebook changed its privacy policy this January. For European users, the changes came into effect on January 30, 2015.
Apart from collecting data from your contacts, from the information you provide, and from everything you see and do on Facebook, the new data policy enables the Facebook app to use your GPS, Bluetooth, and WiFi signals to track your location at all times. Facebook may also collect information about payments you make (including billing, shipping, and contact details). Finally, the social media giant collects data from third-party partners, from other Facebook companies (like Instagram and WhatsApp), and from websites and apps that use its services (websites that offer “Like” buttons and use Facebook Login).
The result? Facebook will now know where you live, work, and travel, what and where you shop, whom you are with, and roughly what your purchasing power is. It will have more information than anyone in your life about your habits, likes and dislikes, political inclinations, concerns, and, depending on the kind of use you make of the Internet, it might come to know about such sensitive issues as medical conditions and sexual preferences.
To Facebook’s credit, its new terms of service, although ambiguous, are clearer than most terms of service one finds on the Internet. Despite the intrusiveness of the privacy policy, one may look benevolently on Facebook: if its terms are comparatively explicit and clear, if users know about them and give their consent, and if in turn the company provides a valuable free service to more than a billion users, why should the new privacy policy be frowned upon? After all, if people don’t like the new terms, they are not forced to use Facebook: they are free not to sign up, or they can delete their account if they are current users.
A closer look, however, might reveal the matter in a different light. The first element that might make one suspicious of the benevolent interpretation is the motive behind Facebook’s change of policy. The ultimate goal is, unsurprisingly, to earn more money. Facebook wants to sell more advertising at higher rates. To do that, it will exploit every bit of data to enable advertising to be more and more targeted, as targeted advertising is believed to be more effective. Perhaps this strategy is justifiable. One might think that only the naïve believe that there can be such a thing as a free service on the Internet; one way to look at Facebook is to think that it is a platform that allows users to sell their private information in exchange for the social benefits of belonging to the networking website. It is uncontroversial that a business needs revenue to survive, and Facebook is a business, not a public service. Given how well Facebook was doing as a business before this policy change took effect, however, one might question whether there might be a limit to how much a company can justify by appealing to financial gain. Facebook’s policy change is not merely allowing the company to survive and flourish; rather, it will foreseeably make it earn an inordinate amount of money. Furthermore, while Facebook stands to win something that is arguably not morally significant (more money than is needed for it to be a highly successful company), users stand to lose something that is morally significant: their privacy. It can be objected that users also profit from something (i.e., advertising that is tailored to them), but I doubt many people are willing to count targeted advertisement as a valuable gain.
Many will not be convinced by this argument. One might think that businesses exist to make as much money as they possibly can, and as long as users consent to the terms of service, there is no wrongdoing on the part of companies. It is unclear, however, what kind of consent is needed from users.
If you are a Facebook user, you will have received in 2014 a notification that read: “By using our services after Jan 1, you agree to our updated terms, data policy and cookies policy and to seeing improved ads based on apps and sites you use.” The kind of consent Facebook is obtaining from users is, at best, implicit, rather than the explicit consent in the form of an “I Agree” button that is required by European law. Legal issues aside, it can also be argued, morally, that the consent acquired is invalid in virtue of the coerciveness of the change of policy.
According to Alan Wertheimer (1987), a proposal is coercive if it is made in the form of a threat that would make the recipient worse off than she ought to be (i.e., “If you do not do what I want you to do, I will make you worse off than you ought to be”), and if the “choice” forced upon the person coerced is such that she in fact has no reasonable choice but to accept the proposal. At least for people who are already Facebook users, the change of policy can easily be interpreted as a threat that would make people worse off than they should be: if you do not accept the new policies, you will be locked out of the site, which implies being cut off from friends, family, work colleagues, social groups, etc. When one first agrees to the Terms of Service of a business, one expects the company to stick to its end of the deal, and it is highly questionable for Facebook to feel it is entitled to change its Terms of Service whenever it sees fit.
One might argue, however, that the new policy is not coercive because Facebook users do have a reasonable choice. Closing one’s Facebook account is not the end of one’s social life. We can meet our friends for coffee, talk to our families at home and on the telephone when we are not at home, and spend time with our work colleagues at lunchtime or at the pub. The more people and institutions have Facebook pages, however, the more costly it becomes to close one’s account. People who close their accounts do lose hundreds of connections—the kinds of relationships that are not close enough to be maintained by personal interaction, email, or phone calls. This is particularly true for people who live outside their home countries; given our time limitations, there are only so many people one can stay in touch with through email and Skype, and that number is significantly smaller than the number of people with whom one can stay in touch through Facebook. While a robber can exercise coercion by imposing on you the odious choice of “Your money or your life,” Facebook imposes on you the choice of “Your privacy or (many of) your relationships.”
One might think that the coerciveness of the new terms is only true for people who were Facebook users before January 2015, but not for people who want to open an account today. The argument behind this idea goes something like this: while the new privacy policy makes old-time users worse off if they decide not to accept the new terms (by locking them out of their Facebook accounts and stopping them from benefitting from everything they have accumulated there), new users cannot be said to be made worse off if they decide not to accept Facebook’s Terms of Service because they do not have an account in the first place. Yet the correct baseline is a normative one, not a descriptive one. In other words, what matters is not the situation of the person before being exposed to the proposal of joining Facebook, but rather what the situation of that person should be. Facebook has changed what normal standards are in some contexts. A person who lives in a developed country and does not have Facebook today cannot be compared to a person who did not have Facebook before it existed, twenty years ago. People who are not on Facebook miss out on a variety of opportunities: they miss occasions to congratulate people on their birthdays, weddings, etc. (when they do not have other ways of learning about these events); they cannot access information from institutions that only publish updates on Facebook; if they are academics, they may miss new publications that authors publicise on their walls; they are precluded from participating in a myriad of social groups (revolving around common interests, nationalities, housing, etc.); and they miss out on events that are only announced on Facebook.
It thus seems that people who are not on Facebook are put at a significant social and professional disadvantage by not enjoying the benefits that have become the common standard in their social circles. Many will think that non-Facebook users are fully responsible for their disadvantage, since they are free to decide whether they want to use the social network or not. Yet it seems that there are strong moral reasons to believe that people should not be forced to choose between surrendering their privacy to businesses or suffering marginalisation.
References
Wertheimer, Alan (1987). Coercion. Princeton: Princeton University Press.
—
For a legal perspective on Facebook’s new privacy policy, visit Ben Zevenbergen’s blog, to which I owe the inspiration for this post.
Excellent post, very persuasive. However, I do take issue with this claim: “I doubt many people are willing to count targeted advertisement as a valuable gain.”
I do think targeted advertisement is a valuable gain in several ways. I find many ads offensive, and targeted ads should be tailored to my beliefs and preferences, which is a benefit to me. Targeted ads also don’t need to be as attention-grabbing and annoying as traditional ads, because they can assume you have some level of interest in the product to begin with. And advertising can provide real benefits, by informing us more consistently of products that will improve our lives. Many people will roll their eyes at this last point, but if people didn’t think products could improve their lives, then they wouldn’t ever buy anything.
Thanks, Cody. Yes, it is quite likely you are right: maybe people do value targeted advertisement more than I thought. The crucial question is whether it is valuable enough for people to be willing to give up their privacy for it. Also, there are degrees of tailoring. If one is worried about offensive ads, a less invasive method is the one previously used by FB, where one has the option to veto advertisements that one finds offensive. That is a long way from giving up one’s location and financial details.
I find targeted advertisement more attention-grabbing and distracting, precisely because I might be tempted by what is advertised.
It seems to me that the most effective advertisement often makes people spend money they don’t have on things they don’t need (in a wide sense).
An even more interesting issue, potentially, is the set of questions sites like Facebook raise about self-identity. The contention of the article is that Facebook’s data collection practices impinge upon our privacy, but this runs the risk of conflating important information that we would only share with a trusted confidant (our desires, fears, etc.) with comparatively trivial information that is already knowable by strangers (e.g., that we walk in a public park on a Sunday). What frightens us may be the way Facebook collates the latter sort of information, which on its own does not feel quite so vulnerable. There is research suggesting that aggregating seemingly trivial data does indeed give insight into more fundamental aspects of personality (http://youarewhatyoulike.com), but I would like to see philosophers challenge this fairly thin conception of self-identity. Maybe we will still care as much about our privacy settings, but maybe not.
Thanks for the comment. Aggregation is indeed a fascinating topic. I would like to tackle it some time from a moral point of view. I tend to think that we can lose privacy through the aggregation of public information that by itself is not sensitive; I still have to think more about the issue though. Thanks for the link—interesting stuff.
Facebook is humanity’s new voluntary slavery.
The manipulation and sale of information is something truly repulsive.
We must keep up with technology while taking care not to lose our freedom.
Thank you for this interesting post!
It might be just a detail, but it is not clear to me whether the source of the issue is coercion as defined by Wertheimer (“If you do not do what I want you to do, I will make you worse off than you *ought to be*”) or something a bit different, which is simply “I will not give you what you are *entitled to*”.
I think you yourself are oscillating between the two interpretations when you write
To me the second interpretation is independent of the first in that it constitutes a reason for blaming Facebook for something which need not be essentially tied to doing harm, but rather to failing to meet a sort of duty (i.e. “stick to one’s end of the deal” in your words). For the normative source of this duty seems to be something like basic values or norms that constitute (implicit) preconditions for explicit agreements (i.e. sharing relevant information, being sincere) quite independently of any harm.
For instance, the first of these duties seems to pertain to the case at hand if by “relevant information” one means something like “information which it would be in the best interest of the user to have to pass a judgment on whether she should give her consent to continue the agreement”. Applying it to the case at hand: any user is *entitled* to benefit from this duty, and therefore, can safely assume that this duty will be fulfilled as a precondition to any change in Facebook’s terms of service.
If that is correct, if such a duty exists, then we can safely say that Facebook is blameworthy for not informing its users sufficiently in advance of the upcoming changes to its terms of service, since Facebook should have seen that, given the nature of these changes, it would be in the best interest of its users to have time to make up their minds.
Of course, it is clear that this issue would be aggravated by any form of coercion in Wertheimer’s sense. But it does not reduce to it, and more importantly, it does not imply it. Suppose all social networks were compatible with one another in some way, and that Facebook allowed its users to export all their data to rival social networks. No social bond or activity would be lost by people leaving, and thus leavers would not be made worse off. More generally, suppose Facebook made no extremely significant changes but dozens of minute, less significant changes (e.g. changing the interface, changing the order of actions necessary to perform a certain function, changing relations between users by forcing people to rank each other on a “friendship scale”, etc.) without telling its users sufficiently in advance and with sound reasons. I think Facebook would be just as blameworthy in that situation as it is in the actual situation, and for the exact same reason.
Thanks for the comment. For Wertheimer’s definition to apply, two conditions must be met: 1. There must be a threat of leaving the recipient worse off than s/he ought to be, and 2. The recipient does not actually have a reasonable choice; s/he must accept the conditions of the threatener.
My point about Facebook not sticking to its end of the deal (the Terms of Service that were presented to us when we first signed up for FB) was meant to motivate the first condition. It seems to me that if FB does not give us what we are *entitled to* (as you say), it leaves us worse off than we *ought to be*. If we are entitled to something, we ought to get it. There is a wrong being committed by not respecting the original Terms of Service, and there might also be harms committed, if the changes have bad effects for users (as they do). One thing that complicates the matter is when Terms of Service include the clause that “These Terms of Service may change at any time without notice.” While it shields the company, legally, it seems a morally questionable clause to include, because it makes negotiation between users and company completely asymmetric. It makes it obvious that the Terms of Service are not a kind of “contract” where the users and the company commit to doing certain things and not doing certain other things to achieve a kind of fair transaction; rather, the ToS are only there for the legal protection of the company. I find that morally problematic.
If all social networks were compatible (like phones are), and people could move to another social network (with Terms of Service respectful of their privacy) and take their connections and information, then there would be no coercion because the recipients of the threat would have a reasonable choice to make (i.e. they would not have to choose between their privacy and their relationships, but rather between FB and another company). In this case, FB might be morally blameworthy for not keeping to their original Terms of Service, but they could not be blamed for coercion.
This puts us on the same page, thank you for the clarification.