Exploring Local Norms in Exp-concave Statistical Learning
Abstract
We consider the problem of stochastic convex optimization with exp-concave losses using Empirical Risk Minimization in a convex class. Answering a question raised in several prior works, we provide an $O( d / n + \log( 1 / \delta) / n )$ excess risk bound valid for a wide class of bounded exp-concave losses, where $d$ is the dimension of the convex reference set, $n$ is the sample size, and $\delta$ is the confidence level. Our result is based on a unified geometric assumption on the gradient of losses and the notion of local norms.
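The excess risk guarantee stated above can be written out explicitly. This is a sketch of the bound's form only; the notation $R$ for the population risk, $\mathcal{W}$ for the convex reference set, and $\hat{w}_{\mathrm{ERM}}$ for the empirical risk minimizer are illustrative assumptions, not taken from the abstract:

```latex
% With probability at least 1 - \delta over the draw of the n samples,
% the empirical risk minimizer \hat{w}_{\mathrm{ERM}} over the convex
% class \mathcal{W} \subset \mathbb{R}^d satisfies
R(\hat{w}_{\mathrm{ERM}}) - \min_{w \in \mathcal{W}} R(w)
  \;\le\; C \left( \frac{d}{n} + \frac{\log(1/\delta)}{n} \right),
% where C is a constant depending on the boundedness and
% exp-concavity parameters of the loss class.
```

Note the $1/n$ (fast) rate in both terms, in contrast to the $1/\sqrt{n}$ rate typical of generic bounded convex losses.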
- Publication:
- arXiv e-prints
- Pub Date:
- February 2023
- DOI:
- 10.48550/arXiv.2302.10726
- arXiv:
- arXiv:2302.10726
- Bibcode:
- 2023arXiv230210726P
- Keywords:
- Computer Science - Machine Learning;
- Mathematics - Statistics Theory;
- Statistics - Machine Learning
- E-Print:
- Accepted for presentation at the Conference on Learning Theory (COLT) 2023