Rows vs. Columns: Randomized Kaczmarz or Gauss-Seidel for Ridge Regression
Abstract
The Kaczmarz and Gauss-Seidel methods aim to solve an $m \times n$ linear system $\boldsymbol{X} \boldsymbol{\beta} = \boldsymbol{y}$ by iteratively refining the solution estimate: the former uses random rows of $\boldsymbol{X}$ to update $\boldsymbol{\beta}$ given the corresponding equations, while the latter uses random columns of $\boldsymbol{X}$ to update the corresponding coordinates of $\boldsymbol{\beta}$. Interest in these methods was recently revitalized by a proof of Strohmer and Vershynin showing linear convergence in expectation for a \textit{randomized} variant of the Kaczmarz method (RK); a similar result for the randomized Gauss-Seidel algorithm (RGS) was later proved by Leventhal and Lewis. Recent work unified the analysis of these algorithms for overcomplete and undercomplete systems, showing convergence to the ordinary least squares (OLS) solution and to the minimum Euclidean norm solution, respectively. This paper considers the natural follow-up to the OLS problem, ridge regression, which solves $(\boldsymbol{X}^* \boldsymbol{X} + \lambda \boldsymbol{I}) \boldsymbol{\beta} = \boldsymbol{X}^* \boldsymbol{y}$. We present particular variants of RK and RGS for solving this system and derive their convergence rates. We compare these to a recent proposal by Ivanov and Zhdanov, which can be interpreted as randomly sampling both rows and columns, and argue that it is often suboptimal. Instead, we claim that one should always use RGS (columns) when $m > n$ and RK (rows) when $m < n$. This difference in behavior is governed by the minimum eigenvalue of the two related positive semidefinite matrices, $\boldsymbol{X}^* \boldsymbol{X} + \lambda \boldsymbol{I}_n$ and $\boldsymbol{X} \boldsymbol{X}^* + \lambda \boldsymbol{I}_m$, in the regimes $m > n$ and $m < n$.
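As a concrete illustration of the row-sampling idea, the sketch below applies the standard randomized Kaczmarz iteration (rows sampled with probability proportional to their squared norm, as in Strohmer and Vershynin) to the square ridge system $(\boldsymbol{X}^* \boldsymbol{X} + \lambda \boldsymbol{I})\boldsymbol{\beta} = \boldsymbol{X}^* \boldsymbol{y}$. This is a minimal sketch in NumPy, not the specific RK/RGS variants analyzed in the paper; the matrix sizes, iteration count, and regularization value are illustrative choices.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=20000, seed=0):
    """Standard randomized Kaczmarz for a consistent system A beta = b.

    Each step picks row i with probability ||a_i||^2 / ||A||_F^2 and
    projects the current iterate onto the hyperplane a_i . beta = b_i.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.sum(A ** 2, axis=1)
    probs = row_norms / row_norms.sum()
    beta = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        a_i = A[i]
        beta += (b[i] - a_i @ beta) / row_norms[i] * a_i
    return beta

# Illustrative ridge regression problem with m > n.
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 5))
y = rng.standard_normal(50)
lam = 0.1
A = X.T @ X + lam * np.eye(5)   # X^* X + lambda I_n
b = X.T @ y                     # X^* y
beta_rk = randomized_kaczmarz(A, b)
beta_exact = np.linalg.solve(A, b)
print(np.linalg.norm(beta_rk - beta_exact))
```

Since $\lambda > 0$ makes $\boldsymbol{X}^* \boldsymbol{X} + \lambda \boldsymbol{I}$ positive definite, the system is consistent and the iterates converge linearly in expectation to the ridge solution.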
Publication: arXiv e-prints
Pub Date: July 2015
arXiv: arXiv:1507.05844
Bibcode: 2015arXiv150705844H
Keywords: Mathematics - Numerical Analysis