
Guest Post: Cambridge Analytica: You Can Have My Money but Not My Vote

by Emily Feng-Gu, Medical Student, Monash University

When news broke that Facebook data from 50 million American users had been harvested and misused, and that Facebook had kept silent about it for two years, the 17th of March 2018 became a bad day for the mega-corporation. In the week following what became known as the Cambridge Analytica scandal, Facebook’s market value fell by around $80 billion. Facebook CEO Mark Zuckerberg came under intense scrutiny and criticism, the #DeleteFacebook movement was born, and the incident received wide media coverage. Elon Musk, the tech billionaire behind Tesla, was one high-profile deleter. Facebook, however, is only one morally questionable half of the story.

Cambridge Analytica was allegedly involved in influencing the outcomes of several high-profile elections, including the 2016 US election, the 2016 Brexit referendum, and the 2013 and 2017 Kenyan elections. Its methods involve data mining and analysis to more precisely tailor campaign materials to audiences and, as whistleblower Christopher Wylie put it, ‘target their inner demons.’1 The practice, known as ‘micro-targeting’, has become more common in the digital age of politics and aims to influence swing voter behaviour by using data and information to home in on fears, anxieties, or attitudes which campaigns can use to their advantage. This was one of the techniques used in Trump’s campaign, targeting the 50 million unsuspecting Americans whose Facebook data was misused. Further adding to the ethical unease, the company was founded by key Republican players Steve Bannon, later to become Trump’s chief strategist, and billionaire Republican donor Robert Mercer.

There are two broad issues raised by the incident.

On one level, the Cambridge Analytica scandal concerns data protection, privacy, and informed consent. The data involved was not, as Facebook insisted, obtained via a ‘breach’ or a ‘leak’. User data was as safe as it had always been – which is to say, not very safe at all. At the time, the harvesting of data, including that of unconsenting Facebook friends, by third-party apps was routine policy for Facebook, provided it was used only for academic purposes. Cambridge researcher and creator of the third-party app in question, Aleksandr Kogan, violated the agreement only when the data was passed onto Cambridge Analytica. Facebook failed to protect its users’ data privacy, that much is clear.

But are risks like these transparent to users? There is a serious concern about informed consent in a digital age. Most people are unlikely to have the expertise necessary to fully understand what it means to use online and other digital services. Consider Facebook: users sign up for an ostensibly free social media service. Facebook did not, however, accrue billions in revenue by offering a service for nothing in return; it profits from having access to large amounts of personal data. It is doubtful that the costs to personal and data privacy are made clear to users, some of whom are children or adolescents. For most people, the concept of big data is likely to be nebulous at best. What does it matter if someone has access to which Pages we have Liked? What exactly does it mean for third-party apps to be given access to data? When signing up to Facebook, I hazard that few people imagined clicking ‘I agree’ could play a role in attempts to influence election outcomes. A jargon-laden ‘terms and conditions’ segment is not enough to inform users regarding what precisely it is they are consenting to.

Moreover, one could question the voluntariness of using social media services like Facebook. Facebook and other online services have become almost a necessary part of participating in certain domains. For example, businesses without a social media presence may be perceived as illegitimate and suffer a significant disadvantage in target audience reach compared to competitors with one. Facebook has also become a key tool for organising social gatherings and communicating with members of a group or organisation. For some, the costliness of abstaining from social media apps introduces an external pressure to join, undermining the voluntariness of consent.

Of course, most decisions in daily life are not fully informed. Some decisions, however, have a higher bar for consent than others, because the degree of information necessary for meaningful consent depends on the significance of the decision.

There seem to be special cases in which requirements for informed consent are reasonably stringent. Healthcare interventions are one such special case. Consent for a medical procedure requires a patient to understand its purpose, benefits, risks, and alternatives, and then to form a decision based on this information. We might find it particularly important to be fully informed about medical procedures because they involve the body, and can significantly affect our ability to live a good life. Perhaps consenting to data collection, storage, distribution, and use in a digital age is another special case. If data could be a window into our minds, and could facilitate targeted behaviour change strategies by external parties such as governments and thereby undermine the right to self-determination, then the significance of consent in the digital context should rise accordingly. It is important to note, however, that while the significance of data and its potential uses ought to be made clearer to users, fully informed consent may not be a feasible goal for many technology non-experts. Stronger data protection laws and policies restraining misuse of data are therefore necessary.

A broader issue is how the misappropriated data was used to undermine political privacy.

Democracy is founded on self-determination. In the words of Thomas Emerson, ‘democracy assumes that the individual citizen will actively and independently participate in making decisions’ (emphasis mine, Emerson, T. 1970. The System of Freedom of Expression). For individuals to appropriately participate in the democratic process, political privacy is necessary to protect against undue pressures or external influences, hence practices like the secret ballot. Micro-targeting and psychological manipulation strategies may impair the ability of voters to freely reason through and choose what type of society they support. Moreover, the political opinions of the very rich or very powerful with the means to either create or hire organisations like Cambridge Analytica should not have a greater weight by virtue of their influence over the general population. Although this may seem idealistic and removed from reality, it does not follow that we should allow existing inequalities to worsen.

Though most coverage of the Cambridge Analytica scandal has focused on its role in Trump’s election campaign and the UK’s VoteLeave campaign, we should be no less disturbed by its alleged role in the 2013 and 2017 Kenyan elections, and 2015 Nigerian election. While data protection laws in the US and UK need reworking, they are weak rather than absent. Data-driven foreign political consultancies have no place in countries with few data protection laws, potentially unstable political systems relatively new to free elections, or tenuous freedom of the press, speech, and association. Whichever lens of normative ethical theory one looks through, it can hardly be ethical to use fragile civil and political rights of vulnerable populations for selfish financial gain.

The Cambridge Analytica scandal has served as a wake-up call on two fronts. Firstly, data protection laws and procedures need urgent reworking. While big data companies like Facebook ought to have been stricter with user data, legislation needs to keep pace with technological advancement rather than relying on companies to behave responsibly. Secondly, there should be limits to the reach of big data and what it can be used for. Data-based micro-targeting organisations like Cambridge Analytica undermine the rights to political privacy and self-determination, and so are deeply immoral.
