Should Social Media Companies Use Artificial Intelligence to Automate Content Moderation on their Platforms and, if so, Under What Conditions?

This article received an honourable mention in the graduate category of the 2023 National Oxford Uehiro Prize in Practical Ethics

Written by University of Oxford student Trenton Andrew Sewell 

Social Media Companies (SMCs) should use artificial intelligence (‘AI’) to automate content moderation (‘CM’) presuming they meet two kinds of conditions. Firstly, ‘End Conditions’ (‘ECs’) which restrict what content is moderated. Secondly, ‘Means Conditions’ (‘MCs’) which restrict how moderation occurs.

This essay focuses on MCs. Assuming some form of moderation is permissible, I discuss how and whether SMCs should use AI to moderate. To this end, I outline how CM AI should respect users' 'moral agency' ('MA'): through transparency, clear rules, and an option to appeal the AI's judgment. I then address whether an AI's failure to respect MA proscribes its use. It does not. SMCs are permitted[1] to use AI, despite procedural failures, to discharge substantive obligations to users and owners.

This essay will demonstrate that:
1) Respect for users' MA entails SMCs should use AI in a:
a. Transparent, reason-giving way,
b. Based on clear rules, and
c. With an option for appeal.
2) But failing to meet these standards does not proscribe using AI. It is a necessary means of discharging important obligations.

Ideal CM AI
People have rights we should respect. This claim is the basis of this essay. Rights include substantive rights, such as the right to expression. Here, I presume that any moderated content is a legitimate target. Hence, moderating this content simpliciter does not violate users' rights, because SMCs could permissibly moderate the post/user.

The question that remains is what ‘procedural-rights’ users possess. How should SMCs respect users whilst moderating? Here, I address the procedural-rights users have because of their ‘moral agency’ (‘MA’).

MA is the capacity of an agent to understand moral reasons. Respecting the dignity of a person involves treating them as a moral-agent[2]. This requires engagement in moral reasoning[3]. Moral reasoning is the process of giving reasons concerning the justification of an act. Engagement in moral reasoning acknowledges one’s MA and dignity – a basic Kantian requirement[4].

Applying MA to Moderation
Moderation is akin to punishment. H.L.A. Hart defined punishment “in terms of five elements:
1. …consequences…considered unpleasant.
2. …for an offence against…rules.
3. …of an…offender for his offence.
4. …intentionally administered by [another] and
5. …administered by an authority constituted by the…system.”[5]

Moderation removes posts and restricts access to platform features, which is unpleasant. It occurs to ‘offenders’ for breaching the community guidelines. It is intentionally administered by SMCs, which created the authorities that impose moderation. It thus satisfies Hart’s five elements.

If moderation is punishment, then respecting MA in the process of moderation will be similar to respecting MA in the process of criminal punishment. That involves giving reasons why the act/offence was wrongful, and why the response to the act/offence was just.[6]

Hence, SMCs respect users’ MA whilst moderating if they:
1) Provide moral reasons to users why they ought not post certain content; and
2) Provide moral reasons to users why they are moderating.[7]

SMCs should give users reasons why the guidelines were violated, and why moderation was the right response. CM AI must, in other words, be transparent[8].

Respect for MA requires more than granting reasons. It requires the option of appealing an AI’s judgement to a human moderator.

“Penalizing someone for violating the rules…reasserts our shared values…calling something hate speech…is a….performative assertion that something should be treated as hate speech and…undoubtedly, it will be disagreed with” [9].

Users should be free to question whether such an assertion is an accurate representation of the guidelines. A moral-agent is also a giver, not merely a receiver, of reasons. To engage in the moral reasoning which respects one’s MA, SMCs should give users the option to justify their post.

Furthermore, to respect users, AI should use rules which are prospectively clear. To respect people as moral-agents is to regard them as capable of following rules they are aware of[10]. Part of what legitimizes punishment is that the user could have complied with the rule.

To respect users as moral-agents, AI should facilitate users’ compliance with rules. CM AI should be:
i) Based on rules;
ii) Which are published;
iii) Prospective;
iv) Intelligible;
v) Free from contradiction;
vi) Possible to follow;
vii) Not constantly changing; and
viii) With congruence between the rule and official actions.[11]

If CM AI satisfies these eight principles, then it respects users by recognizing their MA and, furthermore, by providing rational freedom.

Moral-agents should not face ‘bolts-from-the-blue’. Their freedom should not be dependent on an AI’s whims. The guidelines that the AI follows should allow users to know whether they are in non-compliance, and to avoid it.

This prospective clarity enhances the morality of CM AI by providing ‘freedom from domination’:

“[freedom] is not…the availability of…choices. It is conceivable that a free man might have fewer options…than a slave…[But] we think of slavery as the…embodiment of unfreedom…because…the conditions under which he enjoys…options…are…dependent upon the will of the master.”[12]

Clear rules liberate one from dependence/domination. A user’s freedom is not dependent on the SMC but rather on the rules, which equally constrain moderators.

But why accept that ‘punishment’ by SMCs should respect moral agency? State punishment of crimes might need to – but why content moderation?

Because all should respect each other as moral agents. To do otherwise is to disrespect our dignity. Insofar as moral agency is only consistent with certain procedures of punishment by the state, I see no reason why (as an ideal matter) it would impose fundamentally different requirements on punishment by family, friends, strangers, or, crucially here, SMCs.

In summary:
Moderation is punishment. To respect MA whilst punishing, SMCs must use transparent AI which gives users the reasons that justify SMCs’ response. Furthermore, respecting MA requires that AI decisions are appealable to a human moderator. This provides the opportunity for moral discourse, which further respects MA. Lastly, respecting MA requires that the AI’s decisions allow the user to prospectively avoid non-compliance.

Unideal AI?
Whilst the prior section explored how a CM AI can respect users’ MA, it neglected two questions. Does CM AI currently respect MA? If it does not, should SMCs continue to use AI which violates procedural rights?

The answer to the first question is no. “A common critique of automated decision making is the…lack of transparency…[It is]…difficult to decipher…the specific criteria by which…decisions were made”[13]. Furthermore, systems of appeal, such as Facebook’s “Supreme Court”, are available to very few users[14]. Finally, users report not knowing when they will be moderated, leading to confusion and anger[15].

The answer to the second question – should SMCs use unideal AI – is complicated.

One could answer: if MA should be respected, then SMCs are not at liberty to use CM AI unless it respects users’ MA. In short, if CM AI is not transparent, appealable, and prospectively clear, it should not be used.

This view is flawed because SMCs do not only have process obligations. They have substantive obligations to their users and owners.

For their users, SMCs could be obligated to prevent the spread of toxic content, terrorist propaganda, or child exploitation. To do otherwise is to become complicit. Christopher Bennett explains this complicity, and its corollary obligations, as resulting from ‘normative control’: “control over whether the…act is done with permission or not”[16]. The wrong done by “a car owner who permits another to engage in reckless… [driving]…[is]…that the owner could and should have…withdrawn his consent”[17]. SMCs can – through moderation – determine whether an act is ‘permissible or impermissible’. “[W]herever [SMCs] [do] not mark some act as impermissible, it regards it as permissible…It can be complicit in allowing…acts to be permissible where it should have made them impermissible…complicity…comes about through a failure to [moderate]”[18]. SMCs thus have an obligation to their users to moderate content (the scope of which is a matter for later investigation).

Furthermore, SMCs have shareholders/investors. “A corporate executive is an employee of the owners of the business. He has direct responsibility to his employers. That responsibility is to conduct the business in accordance with their desires, which generally will be to make…money”[19]. When an agent is managing money belonging to another, we traditionally accept she is obliged to act with regard for the principal’s interests. Those same obligations bind all SMCs barring those which are owner-operated[20].

These substantive obligations answer whether SMCs should use imperfect AI, because using CM AI is crucial for discharging these duties. Even if AI is imperfect, SMCs are obliged to use it for CM; CM AI is needed to meet SMCs’ obligations to different stakeholders.

Given SMCs’ general size, CM requires AI. Yann LeCun – Facebook’s chief AI scientist – has stated that: “Without AI there would not be any possibility of…speech filtering, detecting harassment, child exploitation, or terrorist propaganda”[21]. To adequately meet their substantive obligations not to be complicit in certain harmful conduct, SMCs need to use AI.

A potential response is that it is “size…that makes automation seem necessary… [and]…size can be changed”[22]. Specifically, “if moderation is…overwhelming at…scale, it should be understood as a limiting factor on…growth”[23]. SMCs should accept making less profit to reduce the need for CM AI.

However, this neglects their obligations to owners. Even if SMCs could make moderation respect users’ MA by setting growth aside, they would breach their fiduciary obligations to owners. Furthermore, SMCs are under public pressure to moderate. Not moderating could harm their brand, their ability to recruit talent, and so on. Moderation is likely in owners’ interests.

Not using CM AI would result in SMCs failing their substantive obligations to their users, their owners, or, more likely, both. Yet one could say that if a ‘right’ to be recognized as a moral-agent exists, SMCs should not violate it. On this view, procedural-rights are side-constraints which require not using imperfect AI. What this neglects is that X being a right does not mean it is of equal importance to right Y. If all obligations cannot be simultaneously met, then choices must be made about which obligations should go unfulfilled.

I would contend that procedural-rights in CM are among SMCs’ least important obligations. Users who have posted content eligible for moderation are the reason a trade-off of rights is necessary. Had they not done wrong, the SMC would not need to decide between respecting their procedural-rights and the substantive rights of its users or owners. If a set amount of cost must be imposed, then it seems appropriate to impose that cost upon the individual most responsible – the user being moderated[24]. Since not using CM AI would result in SMCs failing their substantive obligations, and these obligations are more important, procedural obligations must give way. Human moderation at scale is not feasible, and imperfect CM AI is preferable to no moderation at all. SMCs should use AI because it discharges their more important duties. Nevertheless, insofar as SMCs can improve their CM AI to bring it closer to the ideal, they are obliged to do so. They should work towards the ideal without letting it become the enemy of the good, or the necessary.

Conclusion
Social Media Companies should use artificial intelligence to automate content moderation. The technology is needed to meet SMCs’ substantive obligations to their users and owners, which means the conditions under which it should be used are broad. Even if AI moderation does not respect users’ moral agency, it should still be used. Nevertheless, where possible, SMCs should work to bring their AI moderation more in line with an ideal of respect. This ideal AI content moderation would be transparent (capable of giving users the reasons which underpin the moderation decision), with an option to appeal to a human moderator (in recognition of the two-sided nature of moral reasoning). Furthermore, the AI should operate on clear, prospective, and reasonably predictable rules, so that users enjoy freedom from domination and are spared moderation arriving like a ‘bolt-from-the-blue’.
AI moderation is a necessity for SMCs.
They should use AI moderation to meet their substantive obligations whilst striving for the procedural ideal.


Notes:

[1] Perhaps obliged.

[2] (Strawson, 1962).

[3] (Hirsch, 1993).

[4] (Jacobs, 2019, p. 29) (Seelmann, 2014).

[5] (Hart, 2008, pp. 5-6).

[6] (Edwards & Simester, 2014) (von Hirsch A. , 1992).

[7] (Edwards & Simester, 2014, p. 64).

[8] (Suzor & Etal, 2019).

[9] (Gillespie, 2020, p. 3).

[10] (von Hirsch & Hörnle, 1995).

[11] (Fuller, 1969, p. 39) (Simmonds, 2007, p. 64).

[12] (Simmonds, 2007, p. 101).

[13] (Gorwa & et.al, 2020, p. 11) (Burrell, 2016).

[14] (Kelion, 2020).

[15] (West, 2018).

[16] (Bennett, 2019, pp. 78-81).

[17] Ibid (p. 81).

[18] Ibid.

[19] (Friedman, 1970).

[20] There is thus an interesting question about how these obligations could apply to Twitter post Elon’s takeover.

[21] (LeCun, 2020).

[22] (Gillespie, 2020, p. 4).

[23] Ibid.

[24] (McMahan, 2005) (Øverland, 2014).

Works Cited
Bennett, C. (2019). How Should We Argue for a Censure Theory of Punishment? In A. du Bois-Pedain, & A. Bottoms, Penal Censure (pp. 67-86). Hart Publishing.
Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 1-16.
Cohen-Almagor, R. (2015). Confronting the internet’s dark side: moral and social responsibility on the free highway. Cambridge: Cambridge University Press.
Edwards, J., & Simester, A. (2014). Prevention with a Moral Voice. In A. Simester, A. Du Bois-Pedain, & U. Neumann, Liberal Criminal Theory (pp. 43-65). Hart Publishing.
Friedman, M. (1970, September 13). The Social Responsibility of Business Is to Increase Its Profits. New York Times.
Fuller, L. (1969). The Morality of Law. New Haven: Yale University Press.
Gillespie, T. (2020). Content moderation, AI, and the question of scale. Big Data & Society, 1-5.
Gorwa, R., et al. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 1-15.
Günther, K. (2014). Crime and Punishment as Communication. In A. du Bois-Pedain, A. Simester, & U. Neumann, Liberal Criminal Theory (pp. 123-140). Hart Publishing.
Hart, H. (2008). Punishment and Responsibility: Essays in the Philosophy of Law. Oxford University Press.
Hirsch, A. v. (1993). Censure and Sanctions. Oxford University Press.
Jacobs, J. (2019). Censure, Sanction and the Moral Psychology of Resentment. In A. du Bois-Pedain, & A. Bottoms, Penal Censure (pp. 19-40). Hart Publishing.
Kelion, L. (2020, September 24). Facebook ‘Supreme Court’ to begin work before US Presidential vote. Retrieved from BBC: https://www.bbc.co.uk/news/technology-54278788
LeCun, Y. (2020, June). Deep learning, neural networks and the future of AI. (C. Anderson, Interviewer)
McMahan, J. (2005). Self-Defense and Culpability. Law and Philosophy, 751–774.
Øverland, G. (2014). Moral Obstacles: An Alternative to the Doctrine of Double Effect. Ethics, 481-506.
Seelmann, K. (2014). Does Punishment Honour the Offender? In A. Du Bois-Pedain, A. Simister, & U. Neumann, Liberal Criminal Theory (pp. 111-121). Hart Publishing.
Simmonds, N. (2007). Law as a Moral Idea. Oxford: Oxford University Press.
Strawson, P. (1962). Freedom and Resentment. Retrieved from UCL: https://www.ucl.ac.uk/~uctytho/dfwstrawson1.htm
Suzor, N. P., et al. (2019). What Do We Mean When We Talk About Transparency? Toward Meaningful Transparency in Commercial Content Moderation. International Journal of Communication, 1526-1543.
von Hirsch, A. (1992). Proportionality in the Philosophy of Punishment. Crime and Justice, 16, 55-98.
von Hirsch, A., & Hörnle, T. (1995). Positive Generalprävention und Tadel. Goltdammer’s Archiv für Strafrecht, 142.
West, S. M. (2018). Censored, suspended, shadowbanned: User interpretations of content moderation on social media platforms. New Media and Society, 4366-4383.
