No-regret Algorithms for Fair Resource Allocation
Abstract
We consider a fair resource allocation problem in the no-regret setting against an unrestricted adversary. The objective is to allocate resources equitably among several agents in an online fashion so that the difference between the aggregate $\alpha$-fair utilities of an optimal static clairvoyant allocation and that of the online policy grows sublinearly with time. The problem is challenging due to the non-additive nature of the $\alpha$-fairness function. Previously, it was shown that no online policy can achieve sublinear standard regret for this problem. In this paper, we propose an efficient online resource allocation policy, called Online Proportional Fair (OPF), that achieves $c_\alpha$-approximate sublinear regret with approximation factor $c_\alpha=(1-\alpha)^{-(1-\alpha)}\leq 1.445$ for $0\leq \alpha < 1$. The upper bound on the $c_\alpha$-regret for this problem exhibits a surprising phase transition phenomenon: the regret bound changes from a power law to a constant at the critical exponent $\alpha=\frac{1}{2}$. As a corollary, our result also resolves an open problem raised by Even-Dar et al. [2009] on designing an efficient no-regret policy for the online job scheduling problem in certain parameter regimes. The proof of our results introduces new algorithmic and analytical techniques, including greedy estimation of the future gradients for non-additive global reward functions and bootstrapping adaptive regret bounds, which may be of independent interest.
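To make the quantities in the abstract concrete, the following sketch (not from the paper; function names are my own) evaluates the standard $\alpha$-fair utility family and the approximation factor $c_\alpha=(1-\alpha)^{-(1-\alpha)}$, numerically checking the claimed bound $c_\alpha \leq 1.445$ over $0 \leq \alpha < 1$:

```python
import math

def alpha_fair_utility(x, alpha):
    """Standard alpha-fair utility of a positive allocation x.
    alpha = 0 gives linear (utilitarian) utility; the limit alpha -> 1
    recovers log(x), i.e., proportional fairness."""
    if alpha == 1.0:
        return math.log(x)
    return x ** (1.0 - alpha) / (1.0 - alpha)

def approximation_factor(alpha):
    """c_alpha = (1 - alpha)^{-(1 - alpha)} for 0 <= alpha < 1."""
    return (1.0 - alpha) ** -(1.0 - alpha)

# c_alpha is maximized where 1 - alpha = 1/e, giving e^{1/e} ~ 1.4447,
# so c_alpha <= 1.445 across the whole range [0, 1).
peak = max(approximation_factor(a / 1000.0) for a in range(1000))
print(round(peak, 3))  # -> 1.445
```

Note that $c_0 = 1$, so for the linear (utilitarian) case OPF's guarantee reduces to standard sublinear regret.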
 Publication:

arXiv e-prints
 Pub Date:
 March 2023
 DOI:
 10.48550/arXiv.2303.06396
 arXiv:
 arXiv:2303.06396
 Bibcode:
 2023arXiv230306396S
 Keywords:

 Computer Science - Machine Learning;
 Statistics - Machine Learning