The success of neural networks across most machine learning tasks and the persistence of adversarial examples have made the verification of such models an important research goal. Several techniques have been successfully developed to verify robustness, and can now evaluate neural networks with thousands of nodes. The main weakness of this approach lies in the specification: robustness is asserted on a validation set consisting of a finite set of examples, i.e., only locally. We propose a notion of global robustness based on generative models, which asserts robustness on a very large and representative set of examples. We show how this notion can be used for verifying neural networks. In this paper we experimentally explore the merits of this approach, and show how it can be used to construct realistic adversarial examples.
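The idea of global robustness via a generative model can be illustrated with a toy sketch: instead of checking robustness only around finitely many validation inputs, we check it around inputs produced by a generator G over its latent space, and search that latent space for on-manifold (i.e., realistic) adversarial examples. The generator, classifier, dimensions, and sampling procedure below are all illustrative assumptions, not the paper's actual models or verification algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "generator": latent z in R^2 -> input x in R^4.
# (Assumed stand-in for a trained generative model.)
W_g = rng.normal(size=(4, 2))

def G(z):
    return W_g @ z

# Toy linear "classifier": input x in R^4 -> label in {0, 1}.
# (Assumed stand-in for the network under verification.)
w_f = rng.normal(size=4)

def f(x):
    return int(w_f @ x > 0.0)

def count_latent_violations(n_samples=200, n_perturb=20, delta=0.05):
    """Sampling-based (not exhaustive) robustness check: for sampled
    latent points z, does every latent perturbation of size <= delta
    keep the label f(G(z)) unchanged?  Each z whose label flips under
    some perturbation yields a realistic adversarial example, since
    the perturbed input G(z + dz) still lies on the generator's range."""
    violations = 0
    for _ in range(n_samples):
        z = rng.normal(size=2)
        label = f(G(z))
        for _ in range(n_perturb):
            dz = rng.uniform(-delta, delta, size=2)
            if f(G(z + dz)) != label:
                violations += 1
                break
    return violations

print("latent-space robustness violations:", count_latent_violations())
```

A sound verifier would replace the random sampling with an exhaustive analysis of the composed network f ∘ G over the latent domain; the sketch only conveys the shape of the specification.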
- Pub Date: October 2019
- Computer Science - Machine Learning;
- Computer Science - Formal Languages and Automata Theory;
- Computer Science - Neural and Evolutionary Computing;
- Statistics - Machine Learning
- A preliminary version was presented at the VNN Symposium (Verification of Neural Networks), Stanford, 2019