Scale-invariant unconstrained online learning
Abstract
We consider a variant of online convex optimization in which both the instances (input vectors) and the comparator (weight vector) are unconstrained. We exploit a natural scale invariance symmetry in our unconstrained setting: the predictions of the optimal comparator are invariant under any linear transformation of the instances. Our goal is to design online algorithms which also enjoy this property, i.e., are scale-invariant. We start with the case of coordinatewise invariance, in which the individual coordinates (features) can be arbitrarily rescaled. We give an algorithm which achieves an essentially optimal regret bound in this setup, expressed by means of a coordinatewise scale-invariant norm of the comparator. We then study general invariance with respect to arbitrary linear transformations. We first give a negative result, showing that no algorithm can achieve a meaningful bound in terms of a scale-invariant norm of the comparator in the worst case. Next, we complement this result with a positive one, providing an algorithm which "almost" achieves the desired bound, incurring only a logarithmic overhead in terms of the norm of the instances.
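To make the symmetry concrete, here is a minimal sketch in LaTeX (the notation is an assumption for illustration, not taken from the paper: instances $x_t$, comparator $u$, invertible transformation $A$). If the instances are linearly transformed, the comparator can be co-transformed by the inverse transpose so that every linear prediction is unchanged:

% If instances map as x_t -> A x_t and the comparator as u -> A^{-T} u,
% then every prediction u^T x_t is preserved:
\[
  (A^{-\top} u)^\top (A x_t) \;=\; u^\top A^{-1} A x_t \;=\; u^\top x_t .
\]

Coordinatewise invariance is the special case in which $A$ is diagonal: rescaling feature $i$ by $a_i \neq 0$ rescales the comparator coordinate $u_i$ by $1/a_i$.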
Publication:
arXiv e-prints
 Pub Date:
 August 2017
 arXiv:
 arXiv:1708.07042
 Bibcode:
 2017arXiv170807042K
Keywords:
Computer Science - Machine Learning;
Statistics - Machine Learning
E-Print:
 To appear in Proc. of the 28th International Conference on Algorithmic Learning Theory (ALT) 2017