We compute the singular values of an $m \times n$ sparse matrix $A$ in a distributed setting, without communication dependence on $m$, which is useful for very large $m$. In particular, we give a simple nonadaptive sampling scheme where the singular values of $A$ are estimated to within $\epsilon$ relative error with constant probability. Our proven bounds focus on the MapReduce framework, which has become the de facto tool for handling such large matrices that cannot be stored on, or even streamed through, a single machine. Along the way, we give a general method to compute $A^TA$: we preserve the singular values of $A^TA$ to within $\epsilon$ relative error with shuffle size $O(n^2/\epsilon^2)$ and reduce-key complexity $O(n/\epsilon^2)$. We further show that if only specific entries of $A^TA$ are required and $A$ has nonnegative entries, then we can reduce the shuffle size to $O(n \log(n) / s)$ and reduce-key complexity to $O(\log(n)/s)$, where $s$ is the minimum cosine similarity among the entries being estimated. All of our bounds are independent of $m$, the larger dimension. We provide open-source implementations in Spark and Scalding, along with experiments in an industrial setting.
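The flavor of the sampling scheme can be illustrated with a plain-Python sketch of an unbiased sampled estimator for $B = A^TA$. This is an illustrative reconstruction, not the paper's implementation (the paper's code is in Spark and Scalding): the map phase emits each co-occurring column pair $(i, j)$ of a row with probability $\min(1, \gamma / (\|c_i\| \|c_j\|))$, dividing the contribution by that probability so the reduce-side sum is unbiased. The function name `sampled_ata` and the oversampling parameter `gamma` are placeholders introduced here.

```python
import math
import random
from collections import defaultdict

def sampled_ata(rows, gamma, seed=0):
    """Estimate B = A^T A by pair sampling (illustrative sketch).

    rows:  list of sparse rows of A, each a dict {column: value}
    gamma: oversampling parameter; larger gamma emits more pairs
           (bigger shuffle size) and lowers variance.
    Returns a dict mapping (i, j) with i <= j to the estimate of B[i][j].
    """
    rows = list(rows)

    # Column norms ||c_j|| (in MapReduce these would be precomputed
    # in one pass and broadcast to the mappers).
    sq = defaultdict(float)
    for r in rows:
        for j, v in r.items():
            sq[j] += v * v
    norm = {j: math.sqrt(s) for j, s in sq.items()}

    rng = random.Random(seed)
    est = defaultdict(float)
    for r in rows:                          # "map" phase: one row at a time
        cols = sorted(r)
        for a in range(len(cols)):
            for b in range(a, len(cols)):
                i, j = cols[a], cols[b]
                p = min(1.0, gamma / (norm[i] * norm[j]))
                if rng.random() < p:
                    # Divide by p so the summed ("reduce") value
                    # is an unbiased estimate of sum_k a_ki * a_kj.
                    est[(i, j)] += r[i] * r[j] / p
    return dict(est)
```

For a small dense example, a `gamma` large enough to force every sampling probability to 1 recovers $A^TA$ exactly; with smaller `gamma`, fewer pairs are shuffled and the result is a noisy unbiased estimate.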
- Pub Date: April 2013
- Computer Science - Data Structures and Algorithms;
- Computer Science - Distributed, Parallel, and Cluster Computing;
- Mathematics - Spectral Theory
- arXiv admin note: text overlap with arXiv:1206.2082