DeepWalking Backwards: From Embeddings Back to Graphs
Abstract
Low-dimensional node embeddings play a key role in analyzing graph datasets. However, little work studies exactly what information is encoded by popular embedding methods, and how this information correlates with performance in downstream machine learning tasks. We tackle this question by studying whether embeddings can be inverted to (approximately) recover the graph used to generate them. Focusing on a variant of the popular DeepWalk method (Perozzi et al., 2014; Qiu et al., 2018), we present algorithms for accurate embedding inversion, i.e., from the low-dimensional embedding of a graph G, we can find a graph H with a very similar embedding. We perform numerous experiments on real-world networks, observing that significant information about G, such as specific edges and bulk properties like triangle density, is often lost in H. However, community structure is often preserved or even enhanced. Our findings are a step towards a more rigorous understanding of exactly what information embeddings encode about the input graph, and why this information is useful for learning tasks.
Publication: arXiv e-prints
Pub Date: February 2021
arXiv: arXiv:2102.08532
Bibcode: 2021arXiv210208532C
Keywords: Computer Science - Machine Learning; Computer Science - Social and Information Networks
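The inversion idea described in the abstract can be illustrated with a minimal sketch. This is not the paper's algorithm: it assumes a simplified NetMF-style embedding (a truncated SVD of a crude log-transformed random-walk matrix, with window size 1) and a naive inversion that keeps the top-scoring node pairs under the reconstructed similarity E Eᵀ as edges of H. All function names and the PMI-like transform are illustrative assumptions.

```python
import numpy as np

def embed(A, dim=2):
    # Simplified DeepWalk/NetMF-style embedding (illustrative, not the
    # paper's exact construction): build a 1-step random-walk transition
    # matrix, apply a crude log transform, and take a truncated SVD.
    d = A.sum(axis=1)
    P = A / d[:, None]                 # row-stochastic transition matrix
    M = np.log1p(P * len(A))           # PMI-like transform (assumption)
    U, S, _ = np.linalg.svd(M)
    return U[:, :dim] * np.sqrt(S[:dim])

def invert(E, n_edges):
    # Naive embedding inversion: score every node pair by the
    # reconstructed similarity E E^T and keep the top n_edges pairs
    # as the edge set of the recovered graph H.
    S = E @ E.T
    np.fill_diagonal(S, -np.inf)       # never propose self-loops
    iu = np.triu_indices(len(S), k=1)
    top = np.argsort(S[iu])[::-1][:n_edges]
    H = np.zeros_like(S)
    for k in top:
        i, j = iu[0][k], iu[1][k]
        H[i, j] = H[j, i] = 1.0
    return H

# Toy example: two disjoint triangles (clear community structure).
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    A[i, j] = A[j, i] = 1.0
H = invert(embed(A, dim=2), n_edges=6)
```

As the abstract notes, one would compare H against A: individual edges and triangle counts may differ, while community structure (here, the two triangles) tends to survive the round trip through the low-dimensional embedding.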