Should Social Media Companies Use Artificial Intelligence to Automate Content Moderation on their Platforms and, if so, Under What Conditions?
This article received an honourable mention in the graduate category of the 2023 National Oxford Uehiro Prize in Practical Ethics
Written by University of Oxford student Trenton Andrew Sewell
Social Media Companies (SMCs) should use artificial intelligence (‘AI’) to automate content moderation (‘CM’), presuming they meet two kinds of conditions. Firstly, ‘End Conditions’ (‘ECs’), which restrict what content is moderated. Secondly, ‘Means Conditions’ (‘MCs’), which restrict how moderation occurs.
This essay focuses on MCs. Assuming some form of moderation is permissible, I will discuss how, and whether, SMCs should use AI to moderate. To this end, I outline how CM AI should respect users’ ‘moral agency’ (‘MA’) through transparency, clarity, and providing an option to appeal the AI’s judgment. I then address whether AI’s failure to respect MA proscribes its use. It does not. SMCs are permitted[1] to use AI, despite its procedural failures, to discharge substantive obligations to users and owners.