Resolving content moderation dilemmas between free speech and harmful misinformation
Abstract
Content moderation of online speech is a moral minefield, especially when two key values come into conflict: upholding freedom of expression and preventing harm caused by misinformation. Yet these decisions are currently made without systematic knowledge of how people would approach such dilemmas. In our study, we systematically varied factors that could influence moral judgments and found that, despite significant differences along political lines, most US citizens preferred quashing harmful misinformation over protecting free speech. Furthermore, people were more likely to remove posts and suspend accounts if the consequences of the misinformation were severe or if it was a repeated offense. Our results can inform the design of transparent, consistent rules for content moderation that the general public accepts as legitimate.
- Publication:
- Proceedings of the National Academy of Sciences
- Pub Date:
- February 2023
- DOI:
- Bibcode:
- 2023PNAS..12010666K