The Unreasonable Volatility of Neural Machine Translation Models
Abstract
Recent work has shown that Neural Machine Translation (NMT) models achieve impressive performance; however, questions about how these models behave remain largely unanswered. We investigate the unexpected volatility of NMT models on inputs that are semantically and syntactically correct. We show that with trivial modifications of source sentences we can identify cases where *unexpected changes* occur in the translation, in the worst case leading to mistranslations. This volatile behavior of translating extremely similar sentences in surprisingly different ways highlights the underlying generalization problem of current NMT models. We find that RNN and Transformer models exhibit volatile behavior in 26% and 19% of sentence variations, respectively.
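The probing recipe the abstract describes can be illustrated with a short script. The sketch below is not the authors' experimental setup: the `Helsinki-NLP/opus-mt-en-de` checkpoint, the year-substitution perturbation, and the digit-masking comparison are all illustrative assumptions. The idea is simply to translate minimally different source sentences and flag any change in the output that goes beyond the perturbed token.

```python
# Minimal volatility probe (illustrative sketch, not the paper's setup):
# translate near-identical source sentences that differ only in one number,
# then check whether the translations differ in anything other than that number.
import re
from transformers import MarianMTModel, MarianTokenizer

MODEL_NAME = "Helsinki-NLP/opus-mt-en-de"  # assumed public en->de model
tokenizer = MarianTokenizer.from_pretrained(MODEL_NAME)
model = MarianMTModel.from_pretrained(MODEL_NAME)

def translate(sentence: str) -> str:
    """Translate a single sentence with greedy/beam defaults."""
    batch = tokenizer([sentence], return_tensors="pt")
    generated = model.generate(**batch)
    return tokenizer.batch_decode(generated, skip_special_tokens=True)[0]

# Trivial modification: substitute only the year, keeping the rest fixed.
template = "He was born in {year} in a small village."
variants = [template.format(year=y) for y in range(1980, 1990)]
translations = [translate(v) for v in variants]

for src, hyp in zip(variants, translations):
    print(f"{src!r} -> {hyp!r}")

# Mask the perturbed slot (digits) so that any remaining difference between
# translations cannot be explained by the substitution itself.
normalized = {re.sub(r"\d+", "<num>", t) for t in translations}
if len(normalized) > 1:
    print(f"Volatility: {len(normalized)} distinct translation patterns")
else:
    print("Stable: all variants share one translation pattern")
```

Masking the substituted slot before comparing is the key design choice here: it isolates exactly the changes the abstract calls unexpected, i.e., differences in word order, lexical choice, or meaning that a one-token source edit should not cause.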
- Publication: arXiv e-prints
- Pub Date: May 2020
- DOI: 10.48550/arXiv.2005.12398
- arXiv: arXiv:2005.12398
- Bibcode: 2020arXiv200512398F
- Keywords: Computer Science - Computation and Language
- E-Print: Accepted to the Neural Generation and Translation Workshop (WNGT) at ACL 2020