Selecting Near-Optimal Approximate State Representations in Reinforcement Learning
Abstract
We consider a reinforcement learning setting introduced in (Maillard et al., NIPS 2011) where the learner does not have explicit access to the states of the underlying Markov decision process (MDP). Instead, she has access to several models that map histories of past interactions to states. Here we improve over known regret bounds in this setting and, more importantly, generalize to the case where the models given to the learner do not contain a true model resulting in an MDP representation, but only approximations of it. We also give improved error bounds for state aggregation.
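As a rough illustration of the setting described above (not part of the original abstract), the sketch below shows how a state-representation model can be viewed as a function from interaction histories to states, and how the learner would be handed several candidate models to select among. All names here (History, StateRepresentation, the example models) are hypothetical and only serve to make the interface concrete, under the assumption that histories are sequences of (action, observation, reward) triples.

```python
# Hypothetical sketch of the setting: the learner never observes the MDP
# state directly, only a history of (action, observation, reward) triples,
# and is given several candidate models phi mapping histories to states.
# None of these names come from the paper; they are illustrative only.

from typing import Callable, Hashable, List, Tuple

History = List[Tuple[int, int, float]]                # (action, observation, reward)
StateRepresentation = Callable[[History], Hashable]   # phi: history -> state

def last_observation_model(history: History) -> Hashable:
    """Candidate model: the state is the most recent observation."""
    return history[-1][1] if history else None

def window_model(history: History, k: int = 2) -> Hashable:
    """Candidate model: the state is the tuple of the last k observations."""
    return tuple(obs for _, obs, _ in history[-k:])

# The learner is given a finite set of such models. In the approximate case
# studied in the paper, none of them need induce an exactly Markovian state
# space; the goal is to keep regret low while selecting among them online.
candidate_models: List[StateRepresentation] = [
    last_observation_model,
    lambda h: window_model(h, k=2),
]
```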
- Publication: arXiv e-prints
- Pub Date: May 2014
- DOI: 10.48550/arXiv.1405.2652
- arXiv: arXiv:1405.2652
- Bibcode: 2014arXiv1405.2652O
- Keywords: Computer Science - Machine Learning