Robust Audio Adversarial Example for a Physical Attack
Abstract
We propose a method to generate audio adversarial examples that can attack a state-of-the-art speech recognition model in the physical world. Previous work assumes that the generated adversarial examples are fed directly to the recognition model, and therefore cannot mount such a physical attack, because of the reverberation and noise introduced by the playback environment. In contrast, our method obtains robust adversarial examples by simulating the transformations caused by playback and recording in the physical world and incorporating these transformations into the generation process. An evaluation and a listening experiment demonstrated that our adversarial examples can attack the model without being noticed by humans. This result suggests that audio adversarial examples generated by the proposed method may become a real threat.
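The core idea described in the abstract, optimizing the perturbation under simulated playback/recording transformations so that it survives them, can be illustrated roughly as below. This is a minimal sketch under assumptions, not the authors' implementation: `model`, `ctc_loss_fn`, and the bank of room impulse responses `rirs` are hypothetical stand-ins, and the noise level, clipping bound, and step counts are purely illustrative.

```python
# Minimal sketch: optimize an audio perturbation so the target model is
# fooled even after simulated over-the-air distortion (reverberation via
# convolution with a room impulse response, plus additive noise).
# `model`, `ctc_loss_fn`, and `rirs` are hypothetical stand-ins.
import torch
import torch.nn.functional as F

def generate_robust_adversarial(x, target, model, ctc_loss_fn, rirs,
                                noise_std=0.01, eps=0.05,
                                steps=1000, lr=1e-3):
    """x: clean waveform of shape (1, T); target: target-phrase labels;
    rirs: list of room impulse responses, each of shape (1, 1, K)."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        # Sample one simulated playback/recording transformation per step:
        # reverberation as convolution with a random room impulse response,
        # followed by additive recording noise.
        rir = rirs[torch.randint(len(rirs), (1,)).item()]
        adv = (x + delta).unsqueeze(0)                      # (1, 1, T)
        simulated = F.conv1d(adv, rir, padding=rir.shape[-1] // 2)
        simulated = simulated + noise_std * torch.randn_like(simulated)
        # Push the model toward the attacker's target transcription while
        # the distortion is applied, so the perturbation remains effective
        # over the air, not only when fed to the model digitally.
        loss = ctc_loss_fn(model(simulated.squeeze(0)), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)   # keep the perturbation quiet
    return (x + delta).detach()
```

Sampling a fresh transformation at every step optimizes the expected loss over the simulated distortions, which is what lets the resulting example remain adversarial after actual playback and recording.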
- Publication:
- arXiv e-prints
- Pub Date:
- October 2018
- DOI:
- 10.48550/arXiv.1810.11793
- arXiv:
- arXiv:1810.11793
- Bibcode:
- 2018arXiv181011793Y
- Keywords:
- Computer Science - Machine Learning;
- Computer Science - Cryptography and Security;
- Computer Science - Sound;
- Electrical Engineering and Systems Science - Audio and Speech Processing;
- Statistics - Machine Learning
- E-Print:
- Accepted to IJCAI 2019