Robust Sparse Mean Estimation via Sum of Squares
Abstract
We study the problem of high-dimensional sparse mean estimation in the presence of an $\epsilon$-fraction of adversarial outliers. Prior work obtained sample- and computationally-efficient algorithms for this task for identity-covariance subgaussian distributions. In this work, we develop the first efficient algorithms for robust sparse mean estimation without a priori knowledge of the covariance. For distributions on $\mathbb R^d$ with "certifiably bounded" $t$-th moments and sufficiently light tails, our algorithm achieves error of $O(\epsilon^{1-1/t})$ with sample complexity $m = (k\log(d))^{O(t)}/\epsilon^{2-2/t}$. For the special case of the Gaussian distribution, our algorithm achieves near-optimal error of $\tilde O(\epsilon)$ with sample complexity $m = O(k^4 \mathrm{polylog}(d))/\epsilon^2$. Our algorithms follow the Sum-of-Squares-based, proofs-to-algorithms approach. We complement our upper bounds with Statistical Query and low-degree polynomial testing lower bounds, providing evidence that the sample-time-error tradeoffs achieved by our algorithms are qualitatively the best possible.
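For intuition about the stated guarantees, the following minimal sketch evaluates the abstract's error and sample-complexity bounds as functions of the contamination rate $\epsilon$, sparsity $k$, dimension $d$, and moment order $t$. The function names are illustrative, and the hidden constants in the $O(\cdot)$ notation are set to 1 purely for illustration; the paper only specifies the bounds up to such constants.

```python
import math

def error_bound(eps, t):
    # Error rate O(eps^{1 - 1/t}) for distributions with certifiably
    # bounded t-th moments (constant taken as 1 for illustration).
    return eps ** (1 - 1 / t)

def sample_complexity(k, d, t, eps):
    # m = (k log d)^{O(t)} / eps^{2 - 2/t}; the O(t) exponent is taken
    # as exactly t here, again purely for illustration.
    return (k * math.log(d)) ** t / eps ** (2 - 2 / t)

def gaussian_sample_complexity(k, d, eps):
    # Gaussian special case: m = O(k^4 polylog(d)) / eps^2, with the
    # polylog taken as log(d)^2 and the constant as 1 (assumptions).
    return k ** 4 * math.log(d) ** 2 / eps ** 2
```

Note how higher moment order $t$ buys a better error exponent (closer to the near-optimal $\tilde O(\epsilon)$ Gaussian rate) at the cost of a sample complexity that grows exponentially in $t$.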
 Publication:
 arXiv e-prints
 Pub Date:
 June 2022
 arXiv:
 arXiv:2206.03441
 Bibcode:
 2022arXiv220603441D
 Keywords:
 Computer Science - Data Structures and Algorithms;
 Computer Science - Machine Learning;
 Mathematics - Statistics Theory;
 Statistics - Machine Learning
 E-Print:
 To appear in COLT 2022