Should Social Media Companies Use Artificial Intelligence to Automate Content Moderation on their Platforms and, if so, Under What Conditions?
This article received an honourable mention in the graduate category of the 2023 National Oxford Uehiro Prize in Practical Ethics
Written by University of Oxford student Trenton Andrew Sewell
Social Media Companies (SMCs) should use artificial intelligence (‘AI’) to automate content moderation (‘CM’) provided they meet two kinds of conditions. Firstly, ‘End Conditions’ (‘ECs’), which restrict what content is moderated. Secondly, ‘Means Conditions’ (‘MCs’), which restrict how moderation occurs.
This essay focuses on MCs. Assuming some form of moderation is permissible, I discuss whether and how SMCs should use AI to moderate. To this end, I outline how CM AI should respect users’ ‘moral agency’ (‘MA’) through transparency, clarity, and an option to appeal the AI’s judgments. I then address whether an AI’s failure to respect MA proscribes its use. It does not. SMCs are permitted[1] to use AI, despite its procedural failures, to discharge substantive obligations to users and owners.