Should Social Media Companies Use Artificial Intelligence to Automate Content Moderation on their Platforms and, if so, Under What Conditions?
This article received an honourable mention in the graduate category of the 2023 National Oxford Uehiro Prize in Practical Ethics
Written by University of Oxford student Trenton Andrew Sewell
Social Media Companies (SMCs) should use artificial intelligence (‘AI’) to automate content moderation (‘CM’), provided they meet two kinds of conditions. Firstly, ‘End Conditions’ (‘ECs’), which restrict what content is moderated. Secondly, ‘Means Conditions’ (‘MCs’), which restrict how moderation occurs.
This essay focuses on MCs. Assuming some form of moderation is permissible, I discuss how, and whether, SMCs should use AI to moderate. To this end, I outline how CM AI should respect users’ ‘moral agency’ (‘MA’) through transparency, clarity, and the provision of an option to appeal the AI’s judgment. I then address whether an AI’s failure to respect MA proscribes its use. It does not. SMCs are permitted[1] to use AI, despite such procedural failures, in order to discharge substantive obligations to users and owners.