Unsupervised prototype learning in an associative-memory network
Abstract
Unsupervised learning in a generalized Hopfield associative-memory network is investigated in this work. First, we prove that the (generalized) Hopfield model is equivalent to a semi-restricted Boltzmann machine with a layer of visible neurons and another layer of hidden binary neurons, so it could serve as a building block for a multilayered deep-learning system. We then demonstrate that the Hopfield network can learn to form a faithful internal representation of the observed samples, with the learned memory patterns serving as prototypes of the input data. Furthermore, we propose a spectral method to extract a small set of concepts (idealized prototypes) as the most concise summary, or abstraction, of the empirical data.
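The idea of memory patterns converging to prototypes of the data can be illustrated with a minimal sketch. The code below is a toy winner-take-all Hebbian-style update on ±1 data, an assumption for illustration only, not the withdrawn paper's actual protocol (and unrelated to the spectral concept-extraction method): each sample nudges the memory pattern it overlaps with most, so patterns drift toward cluster prototypes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: noisy copies of two ±1 "concept" vectors.
N, P = 16, 2
concepts = rng.choice([-1, 1], size=(P, N))
samples = np.array([c * rng.choice([1, -1], size=N, p=[0.9, 0.1])  # flip 10% of bits
                    for c in concepts for _ in range(50)])

# Initialize one memory pattern per cluster from the (still ordered) samples.
xi = samples[[0, 50]].astype(float)

# Winner-take-all Hebbian-style learning (an illustrative rule, not the
# authors' method): each sample pulls its best-matching pattern toward itself.
eta = 0.05
for _ in range(20):
    rng.shuffle(samples)
    for s in samples:
        k = np.argmax(xi @ s)          # pattern with the largest overlap
        xi[k] += eta * (s - xi[k])     # move that pattern toward the sample

prototypes = np.sign(xi)
# Each binarized pattern should now closely match one underlying concept.
```

In this toy setting the learned ±1 patterns act as prototypes: they average out the per-bit noise of the samples assigned to them, which is the intuition behind "memory patterns being prototypes of the input data" in the abstract.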
- Publication:
- arXiv e-prints
- Pub Date:
- April 2017
- DOI:
- 10.48550/arXiv.1704.02848
- arXiv:
- arXiv:1704.02848
- Bibcode:
- 2017arXiv170402848Z
- Keywords:
- Computer Science - Neural and Evolutionary Computing;
- Condensed Matter - Disordered Systems and Neural Networks;
- Computer Science - Machine Learning
- E-Print:
- We found a serious inconsistency between the numerical protocol described in the text and the actual numerical code used by the first author to produce the data. Because of this inconsistency, we have decided to withdraw the preprint. The corresponding author (Hai-Jun Zhou) deeply apologizes for not detecting this inconsistency earlier