Steepest Ascent Training of Support Vector Regressors
Abstract
In this paper, we propose a new method for training L1 (linear sum of slack variables) and L2 (square sum of slack variables) support vector regressors, based on the steepest ascent method previously developed for pattern classification. We evaluate the method on two benchmark data sets and show that, with polynomial and dot product kernels, L2 support vector regressors train faster than L1 support vector regressors, and that working set selection by the exact Karush-Kuhn-Tucker (KKT) conditions does not always yield faster training than selection by the inexact KKT conditions. A sketch of the standard L1 and L2 formulations follows the abstract.
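For reference, the L1/L2 distinction refers to how the slack variables enter the primal objective of epsilon-insensitive support vector regression. The following is a minimal sketch of the standard formulations (assumed here; the notation is not quoted from the paper itself):

% Standard epsilon-insensitive SVR primal objectives (assumed, not quoted from the paper)
% L1 SVR: linear sum of slack variables
\min_{\mathbf{w},\, b,\, \boldsymbol{\xi},\, \boldsymbol{\xi}^{*}}\;
  \tfrac{1}{2}\lVert \mathbf{w}\rVert^{2}
  + C \sum_{i=1}^{M} \bigl( \xi_{i} + \xi_{i}^{*} \bigr)
% L2 SVR: square sum of slack variables
\min_{\mathbf{w},\, b,\, \boldsymbol{\xi},\, \boldsymbol{\xi}^{*}}\;
  \tfrac{1}{2}\lVert \mathbf{w}\rVert^{2}
  + \tfrac{C}{2} \sum_{i=1}^{M} \bigl( \xi_{i}^{2} + \xi_{i}^{*2} \bigr)
% both subject to, for i = 1, ..., M:
%   y_{i} - \mathbf{w}^{\top}\phi(\mathbf{x}_{i}) - b \le \varepsilon + \xi_{i}
%   \mathbf{w}^{\top}\phi(\mathbf{x}_{i}) + b - y_{i} \le \varepsilon + \xi_{i}^{*}
%   \xi_{i},\ \xi_{i}^{*} \ge 0
% (in the L2 case the nonnegativity constraints can be dropped without changing the solution)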
- Publication: IEEJ Transactions on Electronics, Information and Systems
- Pub Date: 2004
- DOI:
- Bibcode: 2004ITEIS.124.2064H
- Keywords: support vector machines; function approximation; Cholesky decomposition; quadratic programming