Scalable Kernel $k$-Means Clustering with Nyström Approximation: Relative-Error Bounds
Abstract
Kernel $k$-means clustering can correctly identify and extract a far more varied collection of cluster structures than the linear $k$-means clustering algorithm. However, kernel $k$-means clustering is computationally expensive when the nonlinear feature map is high-dimensional and there are many input points. Kernel approximation, e.g., the Nyström method, has been applied in previous works to approximately solve kernel learning problems when both of the above conditions are present. This work analyzes the application of this paradigm to kernel $k$-means clustering, and shows that applying the linear $k$-means clustering algorithm to $\frac{k}{\epsilon} (1 + o(1))$ features constructed using a so-called rank-restricted Nyström approximation results in cluster assignments that satisfy a $1 + \epsilon$ approximation ratio in terms of the kernel $k$-means cost function, relative to the guarantee provided by the same algorithm without the use of the Nyström method. As part of the analysis, this work establishes a novel $1 + \epsilon$ relative-error trace norm guarantee for low-rank approximation using the rank-restricted Nyström approximation. Empirical evaluations on the 8.1-million-instance MNIST8M dataset demonstrate the scalability and usefulness of kernel $k$-means clustering with Nyström approximation. This work argues that spectral clustering using Nyström approximation (a popular and computationally efficient, but theoretically unsound, approach to nonlinear clustering) should be replaced with the efficient and theoretically sound combination of kernel $k$-means clustering with Nyström approximation. The superior performance of the latter approach is empirically verified.
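The paradigm described above can be sketched in a few lines of numpy: sample $m$ landmark points, form the kernel blocks $C = K(X, L)$ and $W = K(L, L)$, keep the top-$s$ eigenpairs of $W$, and map each point to the feature vector $C\,U_s \Lambda_s^{-1/2}$, on which ordinary linear $k$-means is then run. This is a minimal illustrative sketch, not the paper's implementation: it assumes an RBF kernel and uniform landmark sampling, and the function names (`rbf_kernel`, `nystrom_features`) are invented here for exposition.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and B.
    d2 = (np.sum(A**2, axis=1)[:, None]
          + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-gamma * d2)

def nystrom_features(X, m, s, gamma=1.0, seed=0):
    """Rank-restricted Nystrom feature map: returns an n x s matrix Z
    such that Z @ Z.T approximates the kernel matrix K(X, X) by the
    best rank-s part of the m-landmark Nystrom approximation."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    L = X[idx]                          # uniformly sampled landmarks
    C = rbf_kernel(X, L, gamma)         # n x m block
    W = rbf_kernel(L, L, gamma)         # m x m landmark kernel
    vals, vecs = np.linalg.eigh(W)      # eigenvalues in ascending order
    top = np.argsort(vals)[::-1][:s]    # indices of the top-s eigenpairs
    U, lam = vecs[:, top], vals[top]
    return C @ (U / np.sqrt(lam))       # Z = C U_s Lambda_s^{-1/2}

# Toy usage: two well-separated Gaussian blobs in the plane.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)),
               rng.normal(3.0, 0.3, (50, 2))])
Z = nystrom_features(X, m=20, s=5, gamma=0.5)
# Linear k-means (e.g., Lloyd's algorithm) would now be run on the rows of Z.
```

The point of the construction is that the expensive kernel $k$-means problem on $n$ points is replaced by a linear $k$-means problem on low-dimensional features, while the rank restriction is what enables the $1 + \epsilon$ relative-error analysis.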
 Publication:

arXiv e-prints
 Pub Date:
 June 2017
 arXiv:
 arXiv:1706.02803
 Bibcode:
 2017arXiv170602803W
 Keywords:

 Computer Science - Machine Learning;
 Statistics - Machine Learning
 E-Print:
 Journal of Machine Learning Research 20 (2019) 1-49