Inducing Language Networks from Continuous Space Word Representations
Abstract
Recent advances in unsupervised feature learning have produced powerful latent representations of words. However, it is still unclear what makes one representation better than another, or how to learn an ideal representation. Understanding the structure of the latent spaces these methods learn is key to any future advance in unsupervised learning. In this work, we introduce a new view of continuous space word representations as language networks. We explore two techniques for creating language networks from learned features, induce networks for two popular word representation methods, and examine the properties of the resulting networks. We find that the induced networks differ from those produced by other methods of creating language networks, and that they contain meaningful community structure.
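The core idea of inducing a network from word vectors can be sketched with a k-nearest-neighbour graph built on cosine similarity. This is a minimal illustrative construction, not necessarily the exact procedure used in the paper; the `knn_graph` function and the toy embeddings are hypothetical.

```python
import numpy as np

def knn_graph(vectors, k=2):
    """Build an undirected k-nearest-neighbour graph from word vectors,
    linking each word to its k most cosine-similar neighbours."""
    # Normalise rows so dot products equal cosine similarities.
    norms = np.linalg.norm(vectors, axis=1, keepdims=True)
    unit = vectors / norms
    sim = unit @ unit.T
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity
    edges = set()
    for i in range(len(vectors)):
        for j in np.argsort(sim[i])[-k:]:  # indices of the k most similar words
            edges.add((min(i, int(j)), max(i, int(j))))
    return edges

# Toy 2-d embeddings for four "words": two similar pairs.
vecs = np.array([[1.0, 0.0],
                 [0.9, 0.1],
                 [0.0, 1.0],
                 [0.1, 0.9]])
graph = knn_graph(vecs, k=1)  # → {(0, 1), (2, 3)}
```

Community-detection algorithms can then be run on the resulting graph to probe whether the embedding space groups words meaningfully.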
- Publication: arXiv e-prints
- Pub Date: March 2014
- DOI: 10.48550/arXiv.1403.1252
- arXiv: arXiv:1403.1252
- Bibcode: 2014arXiv1403.1252P
- Keywords: Computer Science - Machine Learning; Computer Science - Computation and Language; Computer Science - Social and Information Networks
- E-Print: 14 pages