
Oxford Uehiro Prize in Practical Ethics: Should we completely ban “political bots”? Written by Jonas Haeg


This essay was the runner-up in the Graduate Category of the Oxford Uehiro Prize in Practical Ethics 2017

Written by University of Oxford student, Jonas Haeg

Introduction

This paper concerns the ethics of a relatively new and rising trend in political campaigning: the use of “political bots” (henceforth “polibots”). Polibots are pieces of computer code that act on social media platforms (Twitter, Facebook, etc.) so as to mimic persons and thereby gain influence over people’s political opinions.

Currently, “many computer scientists and policy makers treat bot-generated traffic as a nuisance to be detected and managed”[1]. This policy implies a particular ethical view of their nature, namely that there is something inherently morally problematic about them. Here, I question that view. After presenting a brief sketch of what polibots are, I formulate three potential arguments against their use, but argue that none of them succeeds in showing that polibots are intrinsically morally problematic.

Polibots

A polibot is set up on a social media platform with a set of commands governing its behaviour on that platform. Here I focus on what I call “content-bots”: bots programmed to share certain content. These can be programmed to share praise, hate, news articles, or facts, or to repost certain people’s posts. Polibots also need rules specifying the frequency of posting. Very likely, people program them specifically to make them appear human, e.g. there are times at which the bot “sleeps”, “works”, etc. Importantly, I also restrict attention to what I’ll call “modest” content-bots. It is clear that many polibots today are involved in harassment, in spreading falsehoods (e.g. false news articles or other misinformation), and in other forms of objectionable behaviour. Obviously, the use of such polibots is wrong, but this is merely because the content itself is objectionable. To investigate whether there is an intrinsic problem with polibots, I ignore such obviously objectionable bots and focus on the “modest” ones, which are limited to spreading genuine information (e.g. genuine news articles, statistics, etc.) and non-offensive praise and critique.
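To make this concrete, here is a minimal sketch (in Python) of the kind of “modest” content-bot described above. Everything in it is an illustrative assumption rather than a description of any actual bot: the publish function is a placeholder for whatever platform API a real bot would call, and the content list and schedule constants are invented for the example.

```python
import random
import time
from datetime import datetime

# Hypothetical content a "modest" bot might share: genuine articles,
# statistics, and non-offensive praise or critique (placeholders here).
CONTENT = [
    "New employment statistics released today: ...",
    "Candidate X's full speech on healthcare: ...",
    "Op-ed: why policy Y deserves a second look: ...",
]

SLEEP_START, SLEEP_END = 23, 7   # the bot "sleeps" overnight to mimic a person
MIN_GAP, MAX_GAP = 1800, 7200    # seconds between posts (30 min to 2 hours)

def awake(now: datetime) -> bool:
    """Return True outside the bot's simulated sleeping hours."""
    return not (now.hour >= SLEEP_START or now.hour < SLEEP_END)

def publish(message: str) -> None:
    """Placeholder for a real platform call (e.g. posting a status update)."""
    print(f"[{datetime.now():%H:%M}] {message}")

def run() -> None:
    """Post items at randomised, human-looking intervals while 'awake'."""
    while True:
        if awake(datetime.now()):
            publish(random.choice(CONTENT))
        # Randomised gaps make the posting rhythm look less mechanical.
        time.sleep(random.uniform(MIN_GAP, MAX_GAP))

if __name__ == "__main__":
    run()
```

As the sketch makes plain, such a bot does no thinking of its own: every post and every rule about when to post is fixed in advance by its creator, a point that matters for the third objection below.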

What the use of polibots adds to the personal use of a social media account by a single individual is this: an enhanced scope of reach and influence. With polibots, an individual can reach a much larger audience by posting with much higher speed and frequency than they could alone. For this reason, I claim we should understand polibots fundamentally as a communication tool. This essential feature of polibots is shared by many other paradigmatic examples of enhanced means of communication that we frequently use in the political sphere. TV adverts, news articles, phone banking, posters, and so on are all means by which we aim to communicate a message (and thereby gain influence) to a much larger audience than we could reach alone (i.e. by going up to people in the streets ourselves and sharing the message). The use of polibots is just another such communication tool enabled by technology. It is merely another way for people to exercise their right (and desire) to support candidates and policies. Below, however, I’ll sketch a series of arguments for why one might nevertheless object to the use of polibots.

Objections

Objection 1 – The “Illegitimate Tool” Objection

  “Polibots are too effective as communication tools. They give their owners “super-human” means of spreading their political message.”

Even if we generally accept rights to political participation and free speech, some might worry that it is illegitimate to use polibots for such tasks, the reason being that they give persons too much power: an overly enhanced ability to share their opinions. We generally take persons to be entitled to a political voice, but because of the speed and stability with which polibots operate, using them is akin to having several voices. No human could reach that level of effectiveness alone.

There are two problems here. First, with what are we comparing the use of polibots? If I were completely alone, without the ability to rely on technology, the only people I could reach with my message would be the ones I could meet in the streets on my own. Surely, we think I am entitled to some technology. Just sharing my thoughts online allows me to reach many more people than I could without any technology. The same goes for writing op-eds for newspapers, and so on. It is thus hard to know where to draw the line. My suspicion is that a large part of this sort of scepticism towards polibots is due only to the fact that they are – potentially – more effective than older forms of communication technology. In other words, a “status quo bias” of sorts.

Second, this worry seems to be mostly a “number/quantity worry” rather than a problem with polibots per se. After all, using one polibot doesn’t seem to be too effective a communication tool, at least not compared to the impact that TV adverts and the like can have, all of which are thought to be legitimate. Granted, we might be more sceptical about someone using 10,000 polibots. Similarly, however, we would find it morally questionable if some rich person were to buy up all free TV time on several channels to share his political opinions, or if a person were to call up his friends every five minutes to do the same. The explanation of the wrongness, in all these cases, has nothing to do with the tools themselves; rather, any means of communication can become objectionable given a certain quantity of use. Any means of communication, over a certain threshold, becomes a nuisance, and that is what is wrong.

Objection 2 – The Deception Objection:

  “The use of polibots inevitably creates a false impression of popularity and thereby gains political influence through deceptive means.”

If people implicitly interpret user profiles online as being the voice of one person, then it is possible that their perception of the support for a candidate or issue, X, is false. This is worrying because perceived popularity can influence people’s political opinions[2]. In short, then, using polibots enables persons to gain political influence through deceptive means (i.e. by creating a false impression of popularity). That is why they are inherently objectionable.

For reasons of space, let me highlight only two problems with this argument. First, it is at best a “number/quantity problem” again. Often, the number of actual X-supporters will be in the millions, so that no agent online has any possibility of encountering them all. If actual supporters number in the millions, and Chris encounters a few thousand online, does it really matter whether some of those he encounters are polibots rather than genuine users? Either way, he won’t be fooled about the actual number of X-supporters. In some cases, Chris might estimate how many real supporters there are based on how many he meets online. But then, he’ll arrive at the same estimate whether he meets, say, 500 genuine users or 500 fake ones. He’ll only arrive at false numbers or estimates in those smaller cases where the number of genuine supporters plus polibots he meets outnumbers the actual supporters – and this, moreover, will be limited to very small-scale voting scenarios. Suppose that, in a town election, there are 1,000 X-supporters and Chris meets all of these plus 500 polibots. In such limited cases, and if Chris is actually fooled, polibots might have the feared consequence. Note again, however, that this is a worry about numbers and not about polibots per se. At best, the worry above speaks not in favour of banning all polibots, but rather in favour of regulations requiring that the number of polibots be proportionate to the size of the issue.

Similarly, we need to question the other assumption: the influence that perceived popularity has on political opinion. If a false perception of popularity had no such consequences, it would be much less objectionable. Even if no one supported me, it would seem fine for me to pay lots of people to hang up posters if this would have no effect on voting results. It is therefore important to note that few people switch sides merely due to perceived popularity[3]. Granted, it might play some role in their deliberation, those on the fence might be more swayed, and a very small number might consistently base their votes on perceived popularity. But any wrongness associated with this might be outweighed by the intended effects: getting people better informed. After all, the polibots in question share genuine news articles, statistics, and so on, for or against certain candidates. This is the type of information that we want people to base their political opinions on. Creating a false impression of popularity around a candidate might then get people – both those on the fence and those on the other side – to access the information these polibots aim to share, and in that way influence their political opinion. This is a virtuous goal, even if it has some negative side-effects.

Objection 3 – The Artificiality Objection:

“Polibots are artificial non-persons, so using them involves introducing artificial non-persons into online political discourse which by definition ought to be a sphere reserved for political agents only.”

Even though persons must program polibots, some might argue that there is a sense in which it is not a person who is sharing the political content and engaging in political discourse online, and that this is objectionable because we shouldn’t introduce artificial non-persons into a sphere which is supposed to be open only to political participants. Polibots, on this view, corrupt political discourse.

This worry clearly stems from issues about deception: people online might wrongly take themselves to be encountering genuine political agents when they encounter polibots. But consider this choice: either I could act in a certain way online, or I could take a vacation and program all my intended online actions into a polibot to act in my place. If the critique above were correct, it would be wrong for me to choose the latter option. The proponents of this argument seemingly accept political behaviour online only if there is a person behind the computer typing in real time. This is absurd. Surely it would be permissible for, say, a politician to schedule a number of emails to be sent automatically while he is away on vacation for two days. The case is similar with polibots because they are so simple: they do not think on their own, and they need strict rules about exactly when and what to post. For that reason, using polibots is more like the email case. Some person intended to publish all these posts; he just isn’t typing them in real time. Instead, the creator posts them “remotely” – via a polibot – whilst he is asleep or eating dinner, say.

Some might point out that polibots do not bear their creator’s identity online, and that they come in addition to the creator’s personal profiles; that is what makes them wrong. However, we seem to accept both anonymity (e.g. false names online) and people having multiple accounts without making this clear. I therefore reply: if we accept that it is permissible for me to set up a separate account with a hidden identity and write political posts on it, then we should also accept the case where this latter activity is first written into code and then enacted online instead of being typed by me in real time (i.e. a polibot). As a community, we evidently allow the first (it is permitted in the policies of Twitter, for instance), and I see little reason why we should be against the second unless we start completely overhauling the nature of social media platforms so as to allow each person only one account.

Conclusion

Above, I have attempted to disarm some initially intuitive arguments for the inherent impermissibility of polibots. That being said, I have highlighted certain potential issues and instrumental worries. There is more work to be done in the future, especially on policies and rules concerning the content and number of polibots.

References

Snyder, B. (2012) How polls influence behavior. Available at: https://www.gsb.stanford.edu/insights/how-polls-influence-behavior (Accessed: 24 January 2017).

Woolley, S. (2016) ‘Automating power: Social bot interference in global politics’, First Monday, 21(4).

 

[1] Woolley (2016)

[2] Snyder (2012)

[3] Woolley (2016)
