Private Convex Optimization in General Norms
Abstract
We propose a new framework for differentially private optimization of convex functions which are Lipschitz in an arbitrary norm $\|\cdot\|$. Our algorithms are based on a regularized exponential mechanism which samples from the density $\propto \exp(-k(F+\mu r))$ where $F$ is the empirical loss and $r$ is a regularizer which is strongly convex with respect to $\|\cdot\|$, generalizing a recent work of [Gopi, Lee, Liu '22] to non-Euclidean settings. We show that this mechanism satisfies Gaussian differential privacy and solves both DP-ERM (empirical risk minimization) and DP-SCO (stochastic convex optimization) by using localization tools from convex geometry. Our framework is the first to apply to private convex optimization in general normed spaces and directly recovers non-private SCO rates achieved by mirror descent as the privacy parameter $\epsilon \to \infty$. As applications, for Lipschitz optimization in $\ell_p$ norms for all $p \in (1, 2)$, we obtain the first optimal privacy-utility tradeoffs; for $p = 1$, we improve tradeoffs obtained by the recent works [Asi, Feldman, Koren, Talwar '21, Bassily, Guzman, Nandi '21] by at least a logarithmic factor. Our $\ell_p$ norm and Schatten-$p$ norm optimization frameworks are complemented with polynomial-time samplers whose query complexity we explicitly bound.
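To make the regularized exponential mechanism concrete, here is a minimal one-dimensional sketch: it samples $\theta$ with density proportional to $\exp(-k(F(\theta)+\mu r(\theta)))$ on a finite grid. The toy loss $F$ (mean absolute deviation), the quadratic regularizer $r$, and all parameter values are illustrative assumptions, not taken from the paper; the paper's actual samplers operate in high dimensions with explicitly bounded query complexity.

```python
import math
import random

def regularized_exp_mechanism(data, k=20.0, mu=0.5, rng=None):
    """Sample theta with density proportional to exp(-k*(F(theta) + mu*r(theta))).

    Toy 1-D illustration: F is a 1-Lipschitz empirical loss, r is
    1-strongly convex, and sampling is done by inverse-CDF on a grid.
    All choices here are assumptions for illustration only.
    """
    rng = rng or random.Random(0)
    grid = [i / 100 for i in range(-300, 301)]  # theta in [-3, 3]

    def F(t):  # empirical loss: mean absolute deviation (Lipschitz)
        return sum(abs(t - x) for x in data) / len(data)

    def r(t):  # strongly convex regularizer
        return 0.5 * t * t

    # Unnormalized density on the grid, then one inverse-CDF draw.
    weights = [math.exp(-k * (F(t) + mu * r(t))) for t in grid]
    u = rng.random() * sum(weights)
    acc = 0.0
    for t, w in zip(grid, weights):
        acc += w
        if acc >= u:
            return t
    return grid[-1]
```

With all data points at 1.0, samples concentrate near the minimizer of $F + \mu r$ (close to 1, pulled slightly toward 0 by the regularizer); larger $k$ concentrates the density more tightly, trading privacy for utility.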
 Publication:

arXiv e-prints
 Pub Date:
 July 2022
 DOI:
 10.48550/arXiv.2207.08347
 arXiv:
 arXiv:2207.08347
 Bibcode:
 2022arXiv220708347G
 Keywords:

 Computer Science - Machine Learning;
 Computer Science - Cryptography and Security;
 Mathematics - Optimization and Control;
 Statistics - Machine Learning
 E-Print:
 SODA 2023