Contextualizing Hate Speech Classifiers with Post-hoc Explanation
Abstract
Hate speech classifiers trained on imbalanced datasets struggle to determine whether group identifiers like "gay" or "black" are used in offensive or prejudiced ways. Such bias manifests as false positives whenever these identifiers are present, because models fail to learn the contexts that constitute a hateful usage of identifiers. We extract post-hoc explanations from fine-tuned BERT classifiers using the Sampling and Occlusion (SOC) algorithm to efficiently detect bias towards identity terms. We then propose a novel regularization technique based on these explanations that encourages models to learn from the context surrounding group identifiers in addition to the identifiers themselves. Our approach improves over baselines in limiting false positives on out-of-domain data while maintaining or improving in-domain performance. Project page: https://inklab.usc.edu/contextualize-hate-speech/.
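The regularization idea in the abstract can be sketched with a toy example: compute an importance score for each group identifier (here a simple occlusion score stands in for the paper's SOC explanations on BERT) and add the squared scores to the classification loss, so training penalizes the model for relying on the identifier itself. The vocabulary, weights, and helper names below are illustrative assumptions, not the paper's implementation.

```python
import math

# Toy bag-of-words "classifier": each word contributes a fixed logit.
# These weights are invented for illustration only.
VOCAB = {"the": 0.0, "gay": 2.0, "people": 0.1, "are": 0.0, "nice": -1.5}

def logit(tokens):
    """Sum of per-word weights -- a stand-in for a real model's logit."""
    return sum(VOCAB[t] for t in tokens)

def occlusion_importance(tokens, target):
    """phi(target): how much the logit drops when the word is occluded.
    (A simplified proxy for the paper's SOC explanation scores.)"""
    occluded = [t for t in tokens if t != target]
    return logit(tokens) - logit(occluded)

def regularized_loss(tokens, label, identifiers, alpha=0.1):
    """Binary cross-entropy plus alpha * sum of squared importances of
    group identifiers, pushing their attribution towards zero."""
    p = 1.0 / (1.0 + math.exp(-logit(tokens)))
    ce = -(label * math.log(p) + (1 - label) * math.log(1 - p))
    reg = sum(occlusion_importance(tokens, t) ** 2
              for t in identifiers if t in tokens)
    return ce + alpha * reg

sentence = ["the", "gay", "people", "are", "nice"]
print(occlusion_importance(sentence, "gay"))        # importance of the identifier
print(regularized_loss(sentence, 0, ["gay"]))        # penalized loss
```

In the actual method, the penalty is differentiable through the explanation, so gradient descent reduces the identifier's attribution and forces the model to rely on the surrounding context instead.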
- Publication: arXiv e-prints
- Pub Date: May 2020
- DOI: 10.48550/arXiv.2005.02439
- arXiv: arXiv:2005.02439
- Bibcode: 2020arXiv200502439K
- Keywords: Computer Science - Computation and Language; Computer Science - Information Retrieval; Computer Science - Machine Learning
- E-Print: To appear in Proceedings of the 2020 Annual Conference of the Association for Computational Linguistics