Dynamics of Supervised Learning with Restricted Training Sets
Abstract
We study the dynamics of supervised learning in layered neural networks in the regime where the size $p$ of the training set is proportional to the number $N$ of inputs. In this regime the local fields are no longer described by Gaussian probability distributions. We show how dynamical replica theory can be used to predict the evolution of macroscopic observables, including the relevant performance measures, and how it recovers the standard Gaussian formalism in the limit $\alpha=p/N\to\infty$ as a special case. For simplicity we restrict ourselves to single-layer networks and realizable tasks.
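The abstract contains no code; the following is a minimal numerical sketch of the setting it describes, not the paper's method. It simulates online Hebbian learning of a single-layer perceptron on a *fixed* training set of size $p=\alpha N$, with targets generated by a teacher vector (a realizable task), and tracks the standard macroscopic observables $R=\mathbf{J}\cdot\mathbf{B}/N$ and $Q=\mathbf{J}\cdot\mathbf{J}/N$ along with the training and generalization errors. The learning rule, learning rate, and all parameter values are illustrative assumptions; the empirical excess kurtosis of the student's local fields on the training set is printed as a rough indicator of their deviation from Gaussian statistics at finite $\alpha$.

```python
# Minimal sketch (illustrative assumptions, not the paper's calculation):
# online Hebbian learning of a single-layer perceptron on a restricted
# training set of size p = alpha*N, for a realizable task defined by a
# teacher vector B with B.B = N.
import numpy as np

rng = np.random.default_rng(0)

N, alpha = 500, 0.5                  # input dimension and ratio alpha = p/N
p = int(alpha * N)
n_sweeps = 20                        # passes over the fixed training set

B = rng.standard_normal(N)
B *= np.sqrt(N) / np.linalg.norm(B)  # teacher weights, normalized so B.B = N
xi = rng.standard_normal((p, N))     # the fixed (restricted) training inputs
T = np.sign(xi @ B / np.sqrt(N))     # teacher outputs: realizable labels

J = 0.1 * rng.standard_normal(N)     # student weights, small random start
eta = 0.5 / np.sqrt(N)               # learning rate (illustrative choice)

for sweep in range(n_sweeps):
    for mu in rng.permutation(p):    # examples drawn only from the fixed set
        J += eta * T[mu] * xi[mu]    # Hebbian weight update
    u = xi @ J / np.sqrt(N)          # student local fields on the training set
    R = J @ B / N                    # student-teacher overlap
    Q = J @ J / N                    # student weight norm
    E_t = np.mean(np.sign(u) != T)                 # training error
    E_g = np.arccos(R / np.sqrt(Q)) / np.pi        # generalization error
    kurt = np.mean(u**4) / np.mean(u**2)**2 - 3.0  # 0 for Gaussian fields
    print(f"sweep {sweep:2d}: R={R:6.3f} Q={Q:6.3f} "
          f"E_t={E_t:.3f} E_g={E_g:.3f} excess kurtosis={kurt:+.3f}")
```

At finite $\alpha$ the repeated presentation of the same $p$ examples correlates $\mathbf{J}$ with the inputs $\boldsymbol{\xi}^\mu$, which is why the local-field distribution on the training set need not remain Gaussian; in the limit $\alpha\to\infty$ the standard Gaussian description is recovered.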
- Publication:
- arXiv e-prints
- Pub Date:
- March 1998
- DOI:
- 10.48550/arXiv.cond-mat/9803062
- arXiv:
- arXiv:cond-mat/9803062
- Bibcode:
- 1998cond.mat..3062C
- Keywords:
- Condensed Matter - Disordered Systems and Neural Networks
- E-Print:
- 36 pages, LaTeX2e, 12 EPS figures (to be published in: Proceedings of the Newton Institute Workshop on On-Line Learning '97)