Zap Q-Learning for Optimal Stopping Time Problems
Abstract
The objective of this paper is to obtain fast-converging reinforcement learning algorithms that approximate solutions to the problem of discounted-cost optimal stopping in an irreducible, uniformly ergodic Markov chain evolving on a compact subset of $\mathbb{R}^n$. We build on the dynamic programming approach of Tsitsiklis and Van Roy, who propose a Q-learning algorithm to estimate the optimal state-action value function, which in turn defines an optimal stopping rule. We provide insights into why the convergence rate of this algorithm can be slow, and propose a fast-converging alternative, the "Zap-Q-learning" algorithm, designed to achieve an optimal rate of convergence. For the first time, we prove the convergence of the Zap-Q-learning algorithm in the linear function approximation setting. The proof is based on ODE analysis, and the optimal asymptotic variance of the algorithm is reflected in fast convergence in a finance example.
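The two algorithms the abstract contrasts can be sketched as follows. This is a minimal illustration, not the paper's implementation: the chain (a clipped random walk on [0, 1]), the stopping reward `h`, the polynomial features, and the step-size schedules are all assumptions made for the example. The first function implements the Tsitsiklis–Van Roy Q-learning update with a scalar diminishing step size; the second replaces it with a Zap-style matrix gain, estimated on a faster timescale in the spirit of the Zap-Q-learning algorithm.

```python
import numpy as np

def features(x):
    # Simple polynomial basis on the state space [0, 1] (an assumption).
    return np.array([1.0, x, x * x])

def tvr_q_learning(n_steps=5000, beta=0.95, seed=0):
    """Tsitsiklis--Van Roy-style Q-learning sketch with linear features."""
    rng = np.random.default_rng(seed)
    h = lambda x: max(x - 0.5, 0.0)          # stopping reward (illustrative)
    theta = np.zeros(3)
    x = rng.uniform()
    for n in range(1, n_steps + 1):
        # Ergodic chain on [0, 1]: clipped Gaussian random walk (assumption).
        x_next = np.clip(x + rng.normal(scale=0.1), 0.0, 1.0)
        phi, phi_next = features(x), features(x_next)
        # Target: discounted max of stopping at x_next vs. continuing.
        target = beta * max(h(x_next), phi_next @ theta)
        alpha = 1.0 / n                       # scalar diminishing step size
        theta += alpha * phi * (target - phi @ theta)
        x = x_next
    return theta

def zap_q_learning(n_steps=5000, beta=0.95, seed=0):
    """Zap-style variant: matrix-gain update estimated on a faster timescale."""
    rng = np.random.default_rng(seed)
    h = lambda x: max(x - 0.5, 0.0)
    theta = np.zeros(3)
    A_hat = -np.eye(3)                        # initial gain estimate (assumption)
    x = rng.uniform()
    for n in range(1, n_steps + 1):
        x_next = np.clip(x + rng.normal(scale=0.1), 0.0, 1.0)
        phi, phi_next = features(x), features(x_next)
        cont = phi_next @ theta               # estimated continuation value
        d = beta * max(h(x_next), cont) - phi @ theta   # temporal-difference term
        # Gradient of the target w.r.t. theta: active only when continuing.
        psi = beta * phi_next if cont >= h(x_next) else np.zeros(3)
        A_n = np.outer(phi, psi - phi)
        gamma = n ** -0.85                    # faster timescale for the gain matrix
        A_hat += gamma * (A_n - A_hat)
        alpha = 1.0 / n
        theta -= alpha * np.linalg.pinv(A_hat) @ (phi * d)
        x = x_next
    return theta

theta_tvr = tvr_q_learning()
theta_zap = zap_q_learning()
# Stopping rule defined by either estimate: stop at x when
# h(x) >= features(x) @ theta.
```

The matrix gain plays the role of a stochastic Newton–Raphson step: by tracking the mean update Jacobian, it is designed to achieve the optimal asymptotic covariance that a fixed scalar step size cannot.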
Publication:
arXiv e-prints
Pub Date:
April 2019
 arXiv:
 arXiv:1904.11538
 Bibcode:
 2019arXiv190411538C
Keywords:
Electrical Engineering and Systems Science - Systems and Control;
Computer Science - Machine Learning