Greedy Criterion in Orthogonal Greedy Learning
Abstract
Orthogonal greedy learning (OGL) is a stepwise learning scheme that starts by selecting a new atom from a specified dictionary via steepest gradient descent (SGD) and then builds the estimator through orthogonal projection. In this paper, we find that SGD is not the unique greedy criterion and introduce a new greedy criterion, called the "$\delta$-greedy threshold," for learning. Based on the new greedy criterion, we derive an adaptive termination rule for OGL. Our theoretical study shows that the new learning scheme can achieve the existing (almost) optimal learning rate of OGL. Extensive numerical experiments show that the new scheme achieves almost optimal generalization performance while requiring less computation than OGL.
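To make the scheme described above concrete, the sketch below illustrates the two ingredients of OGL (greedy atom selection followed by orthogonal projection) together with a threshold-based stopping rule. The function name `ogl_delta_greedy`, the `delta` parameter, and the exact form of the stopping test are assumptions for illustration only; the paper's "$\delta$-greedy threshold" and adaptive termination rule are defined precisely in the full text.

```python
import numpy as np

def ogl_delta_greedy(X, y, delta=0.05, max_atoms=50):
    """Sketch of orthogonal greedy learning with a threshold-based stopping rule.

    X : (n, p) dictionary matrix whose columns are the candidate atoms.
    y : (n,) response vector.
    delta : threshold parameter (hypothetical form of the stopping rule).
    """
    n, p = X.shape
    residual = y.copy()
    selected = []                      # indices of the chosen atoms
    coef = np.zeros(p)

    for _ in range(max_atoms):
        # Greedy step: pick the atom most correlated with the current residual
        # (the steepest-gradient-descent criterion used by classical OGL).
        correlations = X.T @ residual
        j = int(np.argmax(np.abs(correlations)))

        # Hypothetical delta-greedy threshold: stop once the best (normalized)
        # correlation falls below delta; this is an illustrative stand-in for
        # the adaptive termination rule derived in the paper.
        if np.abs(correlations[j]) < delta * np.linalg.norm(residual) * np.linalg.norm(X[:, j]):
            break

        if j not in selected:
            selected.append(j)

        # Orthogonal projection step: refit all selected atoms jointly by
        # least squares, then update the residual.
        beta, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
        coef[:] = 0.0
        coef[selected] = beta
        residual = y - X[:, selected] @ beta

    return coef, selected
```

In this reading, the threshold test replaces a fixed iteration budget: the loop terminates as soon as no remaining atom is sufficiently aligned with the residual, which is what saves computation relative to running OGL for a prescribed number of steps.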
- Publication:
- arXiv e-prints
- Pub Date:
- April 2016
- DOI:
- 10.48550/arXiv.1604.05993
- arXiv:
- arXiv:1604.05993
- Bibcode:
- 2016arXiv160405993X
- Keywords:
- Computer Science - Machine Learning
- E-Print:
- 12 pages, 6 figures. arXiv admin note: text overlap with arXiv:1411.3553