Randomized Exploration for Reinforcement Learning with General Value Function Approximation
Abstract
We propose a model-free reinforcement learning algorithm inspired by the popular randomized least squares value iteration (RLSVI) algorithm as well as the optimism principle. Unlike existing upper-confidence-bound (UCB) based approaches, which are often computationally intractable, our algorithm drives exploration by simply perturbing the training data with judiciously chosen i.i.d. scalar noises. To attain optimistic value function estimation without resorting to a UCB-style bonus, we introduce an optimistic reward sampling procedure. When the value functions can be represented by a function class $\mathcal{F}$, our algorithm achieves a worst-case regret bound of $\widetilde{O}(\mathrm{poly}(d_E H)\sqrt{T})$, where $T$ is the time elapsed, $H$ is the planning horizon and $d_E$ is the $\textit{eluder dimension}$ of $\mathcal{F}$. In the linear setting, our algorithm reduces to LSVI-PHE, a variant of RLSVI, that enjoys an $\widetilde{O}(\sqrt{d^3H^3T})$ regret. We complement the theory with an empirical evaluation across known difficult exploration tasks.
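The perturbation idea in the linear setting can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: function names, the ridge-regression form, and all hyperparameters (`sigma`, `lam`, `M`) are assumptions. The key points it shows are (i) exploration via i.i.d. scalar noise added to the regression targets rather than a UCB bonus, and (ii) optimism obtained by taking the maximum over several independently perturbed estimates (the optimistic reward sampling procedure).

```python
import numpy as np

def perturbed_lsvi_step(Phi, targets, sigma=1.0, lam=1.0, M=5, seed=None):
    """One value-iteration step with perturbed training data (sketch).

    Phi:     (n, d) feature matrix of observed state-action pairs.
    targets: (n,) regression targets (reward plus max next-state value).

    Instead of adding a UCB-style bonus, draw M independent i.i.d.
    Gaussian perturbations of the targets (and of the regularizer),
    and fit a ridge regression to each perturbed data set.
    """
    rng = np.random.default_rng(seed)
    n, d = Phi.shape
    A = Phi.T @ Phi + lam * np.eye(d)  # regularized Gram matrix
    thetas = []
    for _ in range(M):
        noise = rng.normal(0.0, sigma, size=n)   # i.i.d. scalar noise on targets
        prior = rng.normal(0.0, sigma, size=d)   # perturbation of the regularizer
        w = np.linalg.solve(A, Phi.T @ (targets + noise) + np.sqrt(lam) * prior)
        thetas.append(w)
    return np.stack(thetas)  # (M, d) perturbed value-function weights

def optimistic_q(thetas, phi_sa):
    """Optimistic Q-value at feature phi_sa: max over the M perturbed estimates."""
    return np.max(thetas @ phi_sa)
```

Taking the maximum over the `M` perturbed solutions makes the resulting Q-estimate optimistic with reasonable probability, which is what replaces the explicit exploration bonus of UCB-style methods.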
 Publication:
 arXiv e-prints
 Pub Date:
 June 2021
 arXiv:
 arXiv:2106.07841
 Bibcode:
 2021arXiv210607841I
 Keywords:
 Computer Science - Machine Learning;
 Statistics - Machine Learning
 E-Print:
 32 pages, 5 figures, in Proceedings of the 38th International Conference on Machine Learning, PMLR 139, 2021