Graph Self-supervised Learning with Accurate Discrepancy Learning
Abstract
Self-supervised learning of graph neural networks (GNNs) aims to learn accurate representations of graphs in an unsupervised manner, yielding transferable representations for diverse downstream tasks. Predictive learning and contrastive learning are the two most prevalent approaches for graph self-supervised learning, but each has its own drawbacks. While predictive learning methods can learn the contextual relationships between neighboring nodes and edges, they cannot learn global graph-level similarities. Contrastive learning can learn global graph-level similarities, but its objective of maximizing the similarity between two differently perturbed graphs may result in representations that cannot discriminate between two similar graphs with different properties. To tackle these limitations, we propose a framework that aims to learn the exact discrepancy between the original and the perturbed graphs, coined Discrepancy-based Self-supervised LeArning (D-SLA). Specifically, we create multiple perturbations of the given graph with varying degrees of similarity, and train the model to predict whether each graph is the original graph or a perturbed one. Moreover, we further aim to accurately capture the amount of discrepancy for each perturbed graph using the graph edit distance. We validate D-SLA on various graph-related downstream tasks, including molecular property prediction, protein function prediction, and link prediction, on which it largely outperforms relevant baselines.
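The two core ingredients described above, generating perturbed graphs whose edit distance from the original is known by construction, and a margin-style objective that scores the original above each perturbation in proportion to that distance, can be illustrated with a minimal sketch. This is a conceptual illustration only, not the authors' actual implementation: the function names, the edge-set graph representation, and the simple margin loss are all assumptions for exposition (the real method operates on GNN embeddings).

```python
import random

def perturb(edges, num_ops, nodes, rng):
    """Return a perturbed copy of an edge set by applying `num_ops` random
    edge deletions/additions. Since we control the number of edit operations,
    `num_ops` serves as a (built-in upper bound on the) graph edit distance
    between the original and the perturbed graph."""
    edges = set(edges)
    for _ in range(num_ops):
        if edges and rng.random() < 0.5:
            edges.discard(rng.choice(sorted(edges)))   # edge deletion
        else:
            u, v = rng.sample(nodes, 2)
            edges.add((min(u, v), max(u, v)))          # edge addition
    return edges

def discrepancy_loss(score_orig, scores_pert, edit_dists, margin_per_edit=1.0):
    """Hinge-style loss (an illustrative stand-in for the paper's objective):
    the original graph's score should exceed each perturbed graph's score by
    a margin that grows with that perturbation's edit distance."""
    losses = [max(0.0, margin_per_edit * d - (score_orig - s))
              for s, d in zip(scores_pert, edit_dists)]
    return sum(losses) / len(losses)
```

For example, with a model scoring the original at 5.0 and two perturbations (edit distances 1 and 3) at 4.5 and 2.0, `discrepancy_loss(5.0, [4.5, 2.0], [1, 3])` penalizes only the mildly perturbed graph, whose score is not yet separated by the full one-edit margin.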
Publication: arXiv e-prints
Pub Date: February 2022
arXiv: arXiv:2202.02989
Bibcode: 2022arXiv220202989K
Keywords: Computer Science - Machine Learning; Computer Science - Artificial Intelligence
E-Print: NeurIPS 2022