Delegative Reinforcement Learning: learning to avoid traps with a little help
Abstract
Most known regret bounds for reinforcement learning are either episodic or assume an environment without traps. We derive a regret bound without making either assumption, by allowing the algorithm to occasionally delegate an action to an external advisor. We thus arrive at a setting of active one-shot model-based reinforcement learning that we call DRL (delegative reinforcement learning). The algorithm we construct in order to demonstrate the regret bound is a variant of Posterior Sampling Reinforcement Learning, supplemented by a subroutine that decides which actions should be delegated. The algorithm is not anytime, since its parameters must be adjusted according to the target time discount. Currently, our analysis is limited to Markov decision processes with finite numbers of hypotheses, states and actions.
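To make the setting concrete, here is a hypothetical sketch of posterior sampling with a delegation subroutine, assuming a finite hypothesis class where each hypothesis carries a policy and a set of state-action pairs it considers traps. The delegation criterion (delegate when any hypothesis with non-negligible posterior mass flags the sampled action as a trap), the `threshold` parameter, and the data layout are all illustrative assumptions; the paper's actual DRL algorithm and its guarantees differ in the details.

```python
import random

def psrl_with_delegation(hypotheses, posterior, state, advisor_action,
                         threshold=0.05):
    """Pick an action by posterior sampling, delegating to the advisor
    when plausible hypotheses disagree about the action's safety.
    (Illustrative sketch only; not the paper's exact subroutine.)"""
    # Sample a hypothesis (an MDP model plus its policy) from the posterior.
    sampled = random.choices(hypotheses, weights=posterior, k=1)[0]
    action = sampled["policy"][state]
    # Delegate if any hypothesis with posterior mass above the threshold
    # marks the chosen action as a potential trap in this state.
    for h, p in zip(hypotheses, posterior):
        if p > threshold and (state, action) in h["traps"]:
            return advisor_action(state), True   # delegated
    return action, False                          # acted autonomously
```

In this sketch, delegation happens only while some plausible hypothesis still considers the action dangerous; as the posterior concentrates, the advisor is consulted less and less, which is the intuition behind a regret bound that avoids traps without episodic resets.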
Publication: arXiv e-prints
Pub Date: July 2019
arXiv: arXiv:1907.08461
Bibcode: 2019arXiv190708461K
Keywords: Computer Science - Machine Learning; Statistics - Machine Learning; 68Q32; I.2.6
E-Print: 22 pages