Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies
Abstract
Deep neural networks (DNNs) have achieved significant performance in various tasks. However, recent studies have shown that DNNs can be easily fooled by small perturbations of the input, known as adversarial attacks. As extensions of DNNs to graphs, Graph Neural Networks (GNNs) have been shown to inherit this vulnerability: an adversary can mislead GNNs into giving wrong predictions by modifying the graph structure, for example by manipulating a few edges. This vulnerability has raised tremendous concerns about adopting GNNs in safety-critical applications and has attracted increasing research attention in recent years. Thus, it is necessary and timely to provide a comprehensive overview of existing graph adversarial attacks and their countermeasures. In this survey, we categorize existing attacks and defenses and review the corresponding state-of-the-art methods. Furthermore, we have developed a repository of representative algorithms (https://github.com/DSE-MSU/DeepRobust/tree/master/deeprobust/graph). The repository enables us to conduct empirical studies that deepen our understanding of attacks and defenses on graphs.
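For readers who want to try a structure-perturbation attack hands-on, below is a minimal sketch of how the DeepRobust graph module can be used to poison a graph by flipping a small budget of edges. The `Dataset`, `GCN`, and `Metattack` interfaces follow the repository's README at the time of writing; exact class and argument names may differ across versions.

```python
# A minimal sketch using the DeepRobust graph module; names follow the
# repository's README and may vary across versions.
from deeprobust.graph.data import Dataset
from deeprobust.graph.defense import GCN
from deeprobust.graph.global_attack import Metattack

# Load a benchmark graph (Cora) with a standard train/val/test split.
data = Dataset(root='/tmp/', name='cora')
adj, features, labels = data.adj, data.features, data.labels
idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test

# Train a surrogate GCN that the attacker uses to estimate gradients.
surrogate = GCN(nfeat=features.shape[1], nhid=16,
                nclass=labels.max() + 1, device='cpu')
surrogate.fit(features, adj, labels, idx_train, idx_val)

# Poison the graph structure with Metattack: flip a small number of edges
# so that a GNN retrained on the modified graph performs worse.
attacker = Metattack(model=surrogate, nnodes=adj.shape[0],
                     feature_shape=features.shape, device='cpu')
attacker.attack(features, adj, labels, idx_train,
                idx_unlabeled=idx_test, n_perturbations=50)
modified_adj = attacker.modified_adj  # the perturbed adjacency matrix
```

Training a fresh GCN on `modified_adj` and comparing its test accuracy against one trained on the clean `adj` reproduces the kind of empirical study described in the survey.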
- Publication: arXiv e-prints
- Pub Date: March 2020
- DOI: 10.48550/arXiv.2003.00653
- arXiv: arXiv:2003.00653
- Bibcode: 2020arXiv200300653J
- Keywords: Computer Science - Machine Learning; Computer Science - Cryptography and Security; Statistics - Machine Learning
- E-Print: Accepted by SIGKDD Explorations