On the choice of graph neural network architectures
Abstract
Seminal works on graph neural networks have primarily targeted semi-supervised node classification problems with few observed labels and high-dimensional signals. With the development of graph neural networks, this setup has become a de facto benchmark for a significant body of research. Interestingly, several works have recently shown that in this particular setting, graph neural networks do not perform much better than predefined low-pass filters followed by a linear classifier. However, when learning from little data in a high-dimensional space, it is not surprising that simple and heavily regularized methods are near-optimal. In this paper, we show empirically that in settings with fewer features and more training data, more complex graph neural networks significantly outperform simple models, and we offer a few insights into the proper choice of graph network architectures. Finally, we outline the importance of using sufficiently diverse benchmarks (including lower-dimensional signals) when designing and studying new types of graph neural networks.
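The simple baseline the abstract refers to, a predefined low-pass graph filter followed by a linear classifier, can be sketched as below. This is a hedged illustration, not the paper's exact experimental pipeline: it assumes the common SGC-style filter D^{-1/2}(A+I)D^{-1/2} applied k times, and the graph and features are toy placeholders.

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalized adjacency with self-loops,
    D^{-1/2} (A + I) D^{-1/2}: a predefined low-pass graph filter."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def low_pass_features(A, X, k=2):
    """Apply the fixed filter k times: X <- S^k X. No learned weights;
    all learning happens in the downstream linear classifier."""
    S = normalized_adjacency(A)
    for _ in range(k):
        X = S @ X
    return X

# Toy 4-node path graph with 2-dimensional node features (placeholders).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.array([[1.0, 0.0],
              [0.9, 0.1],
              [0.1, 0.9],
              [0.0, 1.0]])

X_filtered = low_pass_features(A, X, k=2)
# X_filtered would then be fed to an ordinary linear classifier
# (e.g., logistic regression) trained on the labeled nodes only.
```

Because the filter is fixed, the model's capacity is essentially that of a linear classifier on smoothed features, which is why such baselines are so competitive when labels are scarce and the signal dimension is high.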
- Publication:
- arXiv e-prints
- Pub Date:
- November 2019
- DOI:
- 10.48550/arXiv.1911.05384
- arXiv:
- arXiv:1911.05384
- Bibcode:
- 2019arXiv191105384V
- Keywords:
- Computer Science - Social and Information Networks;
- Computer Science - Machine Learning;
- Electrical Engineering and Systems Science - Signal Processing;
- Statistics - Machine Learning
- E-Print:
- 5 pages, 1 figure, accepted at ICASSP 2020