Defending Pre-trained Language Models from Adversarial Word Substitutions Without Performance Sacrifice
Abstract
Pre-trained contextualized language models (PrLMs) have led to strong performance gains in downstream natural language understanding tasks. However, PrLMs can still be easily fooled by adversarial word substitution, which is one of the most challenging textual adversarial attack methods. Existing defense approaches suffer from notable performance loss and complexity. Thus, this paper presents a compact and performance-preserving framework, Anomaly Detection with Frequency-Aware Randomization (ADFAR). Specifically, we design an auxiliary anomaly detection classifier and adopt a multi-task learning procedure, by which PrLMs are able to distinguish adversarial input samples. Then, to defend against adversarial word substitutions, a frequency-aware randomization process is applied to those recognized adversarial input samples. Empirical results show that ADFAR significantly outperforms recently proposed defense methods across various tasks, with much higher inference speed. Remarkably, ADFAR does not impair the overall performance of PrLMs. The code is available at https://github.com/LilyNLP/ADFAR
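To make the frequency-aware randomization step more concrete, below is a minimal, illustrative Python sketch, not the authors' implementation. It assumes a word-frequency table, a candidate-substitution dictionary, and hypothetical hyperparameters (`rare_threshold`, `swap_prob`); the idea sketched here is that rare words in a detected adversarial input are likely attacker-introduced substitutions and can be randomly swapped for more frequent candidates before classification.

```python
# Illustrative sketch (not the authors' code) of frequency-aware randomization.
# `word_freq`, `synonyms`, `rare_threshold`, and `swap_prob` are assumed inputs.
import random


def frequency_aware_randomize(tokens, word_freq, synonyms,
                              rare_threshold=1000, swap_prob=0.5):
    """Randomly replace rare tokens (likely adversarial substitutions)
    with more frequent candidate synonyms."""
    out = []
    for tok in tokens:
        is_rare = word_freq.get(tok.lower(), 0) < rare_threshold
        if is_rare and random.random() < swap_prob:
            # Prefer candidates at least as frequent as the original token.
            candidates = [w for w in synonyms.get(tok.lower(), [])
                          if word_freq.get(w, 0) >= word_freq.get(tok.lower(), 0)]
            if candidates:
                tok = random.choice(candidates)
        out.append(tok)
    return out


# Toy usage: an attacker may have swapped "good" for the rarer "commendable".
tokens = "the movie was commendable".split()
word_freq = {"the": 50000, "movie": 8000, "was": 40000,
             "commendable": 120, "good": 20000}
synonyms = {"commendable": ["good", "praiseworthy"]}
print(frequency_aware_randomize(tokens, word_freq, synonyms))
```

In the paper's pipeline, this randomization would only be applied to inputs flagged by the auxiliary anomaly detection classifier, so clean inputs pass through unchanged and overall performance is preserved.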
- Publication: arXiv e-prints
- Pub Date: May 2021
- DOI: 10.48550/arXiv.2105.14553
- arXiv: arXiv:2105.14553
- Bibcode: 2021arXiv210514553B
- Keywords: Computer Science - Computation and Language
- E-Print: Findings of ACL: ACL 2021