Essentially Sharp Estimates on the Entropy Regularization Error in Discrete Discounted Markov Decision Processes
Abstract
We study the error introduced by entropy regularization of infinite-horizon discrete discounted Markov decision processes. We show that this error decreases exponentially in the inverse regularization strength, both in a weighted KL divergence and in value, with a problem-specific exponent. We provide a lower bound that matches our upper bound up to a polynomial factor. Our proof relies on the correspondence between the solutions of entropy-regularized Markov decision processes and gradient flows of the unregularized reward with respect to a Riemannian metric common in natural policy gradient methods. Further, this correspondence allows us to identify the limit of the gradient flow as the generalized maximum entropy optimal policy, thereby characterizing the implicit bias of the Kakade gradient flow, which corresponds to a time-continuous version of the natural policy gradient method. We use this to show that, for entropy-regularized natural policy gradient methods, the overall error decays exponentially in the square root of the number of iterations, improving on existing sublinear guarantees.
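The regularization error discussed in the abstract can be illustrated numerically. The sketch below is not taken from the paper; the random toy MDP, the soft value-iteration routine, and all variable names are assumptions made for illustration. It computes the entropy-regularized optimal (softmax) policy of a small discounted MDP for several regularization strengths tau and measures how far its unregularized value falls short of the optimal value.

```python
import numpy as np

# Toy discounted MDP (randomly generated; purely illustrative, not from the paper).
rng = np.random.default_rng(0)
S, A, gamma = 5, 3, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))   # transition kernel P[s, a, s']
r = rng.uniform(size=(S, A))                 # reward r(s, a)

def soft_value_iteration(tau, iters=2000):
    """Entropy-regularized (soft) value iteration; tau = 0 recovers the hard Bellman backup."""
    V = np.zeros(S)
    for _ in range(iters):
        Q = r + gamma * P @ V                # soft Q-values, shape (S, A)
        if tau > 0:
            m = Q.max(axis=1)                # stabilized log-sum-exp backup
            V = m + tau * np.log(np.exp((Q - m[:, None]) / tau).sum(axis=1))
        else:
            V = Q.max(axis=1)
    return Q

def optimal_policy(Q, tau):
    """Softmax policy pi_tau(a|s) proportional to exp(Q(s,a)/tau); greedy policy for tau = 0."""
    if tau > 0:
        logits = (Q - Q.max(axis=1, keepdims=True)) / tau
        pi = np.exp(logits)
        return pi / pi.sum(axis=1, keepdims=True)
    pi = np.zeros_like(Q)
    pi[np.arange(S), Q.argmax(axis=1)] = 1.0
    return pi

def unregularized_value(pi):
    """Exact policy evaluation under the unregularized reward."""
    P_pi = np.einsum('sa,sap->sp', pi, P)
    r_pi = (pi * r).sum(axis=1)
    return np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)

V_star = unregularized_value(optimal_policy(soft_value_iteration(tau=0.0), 0.0))
for tau in [1.0, 0.5, 0.25, 0.125, 0.0625]:
    pi_tau = optimal_policy(soft_value_iteration(tau), tau)
    print(f"tau = {tau:7.4f}   value error = {np.max(V_star - unregularized_value(pi_tau)):.3e}")
```

Plotting the printed errors against 1/tau on a logarithmic scale should exhibit the exponential decay described in the abstract, with a rate that depends on the particular MDP.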
- Publication:
- arXiv e-prints
- Pub Date:
- June 2024
- DOI:
- 10.48550/arXiv.2406.04163
- arXiv:
- arXiv:2406.04163
- Bibcode:
- 2024arXiv240604163M
- Keywords:
- Mathematics - Optimization and Control;
- Computer Science - Machine Learning;
- Electrical Engineering and Systems Science - Systems and Control;
- 37N40;
- 65K05;
- 90C05;
- 90C40;
- 90C53
- E-Print:
- 26 pages, 1 figure