PAC Bounds for Discounted MDPs
Abstract
We study upper and lower bounds on the sample-complexity of learning near-optimal behaviour in finite-state discounted Markov Decision Processes (MDPs). For the upper bound we make the assumption that each action leads to at most two possible next-states, and we prove a new bound for a UCRL-style algorithm on the number of time-steps during which it is not Probably Approximately Correct (PAC). The new lower bound strengthens previous work by being both more general (it applies to all policies) and tighter. The upper and lower bounds match up to logarithmic factors.
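To make the setting concrete, the following is a minimal sketch (not taken from the paper) of a finite-state discounted MDP in which every state-action pair has at most two possible next-states, mirroring the structural assumption used for the upper bound. The specific states, transition probabilities, rewards, and discount factor are illustrative choices; value iteration is used only to exhibit what "near-optimal behaviour" means in this model.

```python
import numpy as np

# Illustrative MDP (not from the paper): 3 states, 2 actions, discount 0.9.
# Every (state, action) pair has at most two possible next-states,
# matching the assumption made for the upper bound.
gamma = 0.9

# transitions[(s, a)] = [(next_state, probability), ...]  (<= 2 entries each)
transitions = {
    (0, 0): [(0, 0.5), (1, 0.5)],
    (0, 1): [(2, 1.0)],
    (1, 0): [(1, 1.0)],
    (1, 1): [(0, 0.3), (2, 0.7)],
    (2, 0): [(2, 1.0)],
    (2, 1): [(1, 0.6), (2, 0.4)],
}
rewards = {(0, 0): 0.0, (0, 1): 0.1, (1, 0): 0.5,
           (1, 1): 0.2, (2, 0): 1.0, (2, 1): 0.3}

def value_iteration(eps=1e-8):
    """Return (optimal values, greedy policy) via standard value iteration."""
    v = np.zeros(3)
    while True:
        # Bellman optimality backup: Q(s,a) = r(s,a) + gamma * E[V(s')]
        q = np.array([[rewards[s, a]
                       + gamma * sum(p * v[t] for t, p in transitions[s, a])
                       for a in range(2)] for s in range(3)])
        v_new = q.max(axis=1)
        if np.abs(v_new - v).max() < eps:
            return v_new, q.argmax(axis=1)
        v = v_new

v_star, policy = value_iteration()
# State 2 loops on itself under action 0 with reward 1, so V*(2) = 1/(1-0.9) = 10.
```

A PAC reinforcement-learning algorithm in this setting must, on all but a bounded number of time-steps, act epsilon-close to the optimal values computed above, without knowing `transitions` or `rewards` in advance; the paper bounds how many such non-epsilon-optimal steps a UCRL-style algorithm can incur.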
- Publication: arXiv e-prints
- Pub Date: February 2012
- DOI: 10.48550/arXiv.1202.3890
- arXiv: arXiv:1202.3890
- Bibcode: 2012arXiv1202.3890L
- Keywords: Computer Science - Machine Learning
- E-Print: 25 LaTeX pages