Robust Batch Policy Learning in Markov Decision Processes
Abstract
We study the offline, data-driven sequential decision-making problem in the framework of the Markov decision process (MDP). To enhance the generalizability and adaptivity of the learned policy, we propose to evaluate each policy by a set of average rewards with respect to distributions centered at the policy-induced stationary distribution. Given a pre-collected dataset of multiple trajectories generated by some behavior policy, our goal is to learn a robust policy in a pre-specified policy class that maximizes the smallest value in this set. Leveraging the theory of semiparametric statistics, we develop a statistically efficient policy learning method for estimating the defined robust optimal policy. A rate-optimal regret bound, up to a logarithmic factor, is established in terms of the total number of decision points in the dataset.
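To make the max-min objective concrete, here is a minimal sketch in a tabular setting. It is not the paper's estimator: it assumes the transition matrix and rewards are known, and it uses a total-variation ball around the policy-induced stationary distribution as the (hypothetical) set of perturbed distributions. The robust value is the worst-case average reward over that ball, obtained by moving up to `eps` of probability mass from the highest-reward states onto the lowest-reward state.

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution of an ergodic transition matrix P (rows sum to 1)."""
    evals, evecs = np.linalg.eig(P.T)
    # Eigenvector associated with the eigenvalue closest to 1.
    v = np.real(evecs[:, np.argmax(np.real(evals))])
    v = np.abs(v)
    return v / v.sum()

def robust_value(P, r, eps):
    """Worst-case average reward over distributions within total-variation
    distance eps of the policy-induced stationary distribution.

    The minimizing adversary shifts up to eps of probability mass from the
    highest-reward states to the single lowest-reward state."""
    d = stationary_distribution(P)
    d_adv = d.copy()
    worst = int(np.argmin(r))
    budget = eps
    for s in np.argsort(r)[::-1]:      # states, highest reward first
        if s == worst:
            continue
        move = min(d_adv[s], budget)
        d_adv[s] -= move
        d_adv[worst] += move
        budget -= move
        if budget <= 0:
            break
    return float(d_adv @ r)
```

A robust policy in a finite class would then be the one maximizing `robust_value` over its induced transition matrix; at `eps = 0` this recovers the ordinary average reward.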
Publication: arXiv e-prints
Pub Date: November 2020
arXiv: arXiv:2011.04185
Bibcode: 2020arXiv201104185Q
Keywords: Mathematics - Statistics Theory; Computer Science - Machine Learning; Statistics - Machine Learning