Lower Bounds for Learning in Revealing POMDPs
Abstract
This paper studies the fundamental limits of reinforcement learning (RL) in the challenging \emph{partially observable} setting. While it is well-established that learning in Partially Observable Markov Decision Processes (POMDPs) requires exponentially many samples in the worst case, a surge of recent work shows that polynomial sample complexities are achievable under the \emph{revealing condition} -- a natural condition that requires the observables to reveal some information about the unobserved latent states. However, the fundamental limits for learning in revealing POMDPs are much less understood, with existing lower bounds being rather preliminary and having substantial gaps from the current best upper bounds. We establish strong PAC and regret lower bounds for learning in revealing POMDPs. Our lower bounds scale polynomially in all relevant problem parameters in a multiplicative fashion, and achieve significantly smaller gaps against the current best upper bounds, providing a solid starting point for future studies. In particular, for \emph{multi-step} revealing POMDPs, we show that (1) the latent state-space dependence is at least $\Omega(S^{1.5})$ in the PAC sample complexity, which is notably harder than the $\widetilde{\Theta}(S)$ scaling for fully-observable MDPs; (2) any polynomial sublinear regret is at least $\Omega(T^{2/3})$, suggesting its fundamental difference from the \emph{single-step} case where $\widetilde{O}(\sqrt{T})$ regret is achievable. Technically, our hard instance construction adapts techniques in \emph{distribution testing}, which is new to the RL literature and may be of independent interest.
 Publication:

arXiv e-prints
 Pub Date:
 February 2023
 DOI:
 10.48550/arXiv.2302.01333
 arXiv:
 arXiv:2302.01333
 Bibcode:
 2023arXiv230201333C
 Keywords:

 Computer Science - Machine Learning;
 Computer Science - Information Theory;
 Mathematics - Statistics Theory;
 Statistics - Machine Learning