Average-Cost Optimality Results for Borel-Space Markov Decision Processes with Universally Measurable Policies
Abstract
We consider discrete-time Markov Decision Processes with Borel state and action spaces and universally measurable policies. For several long-run average cost criteria, we establish the following optimality results: the optimal average cost functions are lower semianalytic, there exist universally measurable semi-Markov or history-dependent $\epsilon$-optimal policies, and similar results hold for the minimum average costs achievable by Markov or stationary policies. We then analyze the structure of the optimal average cost functions, proving sufficient conditions for them to be constant almost everywhere with respect to certain $\sigma$-finite measures. The most important condition here is that each subset of states with positive measure be reachable with probability one under some policy. We obtain our results by exploiting an inequality for the optimal average cost functions and its connection with submartingales, and, in a special case that involves stationary policies, also by using the theory of recurrent Markov chains.
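For context, the long-run average cost criterion discussed above is typically formulated as follows (a standard textbook definition with one-stage cost $g$; the paper itself treats several variants of this criterion, which may differ in detail):

```latex
J(\pi, x) \;=\; \limsup_{N \to \infty} \frac{1}{N}\,
\mathbb{E}^{\pi}_{x}\!\left[\sum_{n=0}^{N-1} g(x_n, a_n)\right],
\qquad
J^{*}(x) \;=\; \inf_{\pi}\, J(\pi, x),
```

where $\pi$ ranges over the admissible (here, universally measurable) policies, $x$ is the initial state, and $J^{*}$ is the optimal average cost function whose measurability and structural properties the paper establishes.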
- Publication:
- arXiv e-prints
- Pub Date:
- March 2021
- DOI:
- 10.48550/arXiv.2104.00181
- arXiv:
- arXiv:2104.00181
- Bibcode:
- 2021arXiv210400181Y
- Keywords:
- Mathematics - Optimization and Control;
- 90C40;
- 93E20
- E-Print:
- 36 pages