Verification of indefinite-horizon POMDPs
Abstract
The verification problem in MDPs asks whether, for any policy resolving the nondeterminism, the probability that something bad happens is bounded by some given threshold. This verification problem is often overly pessimistic, as the policies it considers may depend on the complete system state. This paper considers the verification problem for partially observable MDPs, in which the policies make their decisions based on (the history of) the observations emitted by the system. We present an abstraction-refinement framework extending previous instantiations of the Lovejoy approach. Our experiments show that this framework significantly improves the scalability of the approach.
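As a sketch of the problem statement (the notation here is illustrative, not taken from the paper): writing $\Sigma_{\mathrm{obs}}$ for the observation-based policies of a POMDP $\mathcal{M}$, $B$ for the set of bad states, and $\lambda \in [0,1]$ for the threshold, the verification problem asks whether

$$\sup_{\sigma \in \Sigma_{\mathrm{obs}}} \Pr_{\mathcal{M}}^{\sigma}\bigl(\lozenge B\bigr) \;\le\; \lambda.$$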
- Publication: arXiv e-prints
- Pub Date: June 2020
- arXiv: arXiv:2007.00102
- Bibcode: 2020arXiv200700102B
- Keywords: Computer Science - Artificial Intelligence; Computer Science - Logic in Computer Science
- E-Print: Technical report for ATVA 2020 paper with the same title