A Discrete-Time Switching System Analysis of Q-learning
Abstract
This paper develops a novel control-theoretic framework to analyze the non-asymptotic convergence of Q-learning. We show that the dynamics of asynchronous Q-learning with a constant step-size can be naturally formulated as a discrete-time stochastic affine switching system. Moreover, the evolution of the Q-learning estimation error is over- and underestimated by the trajectories of two simpler dynamical systems. Based on these two systems, we derive a new finite-time error bound for asynchronous Q-learning with a constant step-size. Our analysis also sheds light on the overestimation phenomenon of Q-learning. We further illustrate and validate the analysis through numerical simulations.
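For concreteness, below is a minimal sketch of the asynchronous, constant step-size Q-learning iteration that the abstract refers to. The small random MDP, uniform exploration policy, and deterministic rewards are illustrative assumptions made here for the example; they are not taken from the paper, and the sketch does not implement the paper's switching-system analysis itself.

```python
import numpy as np

# Illustrative sketch: asynchronous Q-learning with a constant step-size alpha.
# The MDP below (5 states, 3 actions, random transitions, deterministic rewards)
# is an assumption for demonstration purposes only.

rng = np.random.default_rng(0)

n_states, n_actions = 5, 3
gamma, alpha = 0.9, 0.1  # discount factor and constant step-size

# P[s, a] is a probability distribution over next states; R[s, a] is the reward.
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))

Q = np.zeros((n_states, n_actions))
s = 0
for k in range(50_000):
    a = rng.integers(n_actions)                  # behavior policy: uniform exploration
    s_next = rng.choice(n_states, p=P[s, a])
    # Asynchronous update: only the visited (s, a) entry is updated at each step.
    td_target = R[s, a] + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])
    s = s_next

print(Q)
```

With a constant alpha the iterates do not converge exactly but fluctuate around the optimal Q-values; the paper's finite-time bound quantifies this error as a function of the step-size.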
- Publication: arXiv e-prints
- Pub Date: February 2021
- arXiv: arXiv:2102.08583
- Bibcode: 2021arXiv210208583L
- Keywords: Mathematics - Optimization and Control; Computer Science - Artificial Intelligence