Convex operator-theoretic methods in stochastic control
Abstract
This paper develops operator-theoretic methods for solving nonlinear stochastic optimal control problems to global optimality. These methods leverage the convex duality between optimally controlled diffusion processes and Hamilton-Jacobi-Bellman (HJB) equations for nonlinear systems in an ergodic Hilbert-Sobolev space. Specifically, a generalized Bakry-Émery condition is introduced under which the global exponential stabilizability of a large class of nonlinear systems can be established. This condition is shown to be sufficient for the existence of solutions of the ergodic HJB equation for stochastic optimal control problems on infinite time horizons. Moreover, a novel dynamic programming recursion for bounded linear operators is introduced, which can be used to solve HJB equations numerically via a Galerkin projection.
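To make the last idea concrete, the following is a minimal sketch, not the paper's algorithm: it Galerkin-projects the generator of a one-dimensional Ornstein-Uhlenbeck diffusion onto a small Hermite polynomial basis and then runs a simple fixed-point recursion on the projected operator for a discounted, uncontrolled linear analogue of the HJB equation. The basis size, discount rate, and cost function are illustrative assumptions.

```python
# Illustrative sketch (not the paper's method): Galerkin projection of the
# Ornstein-Uhlenbeck generator A f = f'' - x f' onto the probabilists'
# Hermite polynomials He_0..He_N, then a fixed-point recursion for the
# discounted linear equation (rho - A) V = c.
import numpy as np

N = 6          # highest Hermite degree kept in the Galerkin basis (assumption)
rho = 1.0      # discount rate (illustrative choice)

# Coefficient-space matrices: D is differentiation (He_n' = n He_{n-1}),
# X is multiplication by x (x He_n = He_{n+1} + n He_{n-1}).
D = np.zeros((N + 1, N + 1))
X = np.zeros((N + 1, N + 1))
for n in range(1, N + 1):
    D[n - 1, n] = n
    X[n, n - 1] = 1.0
    X[n - 1, n] = n

# Projected generator A = d^2/dx^2 - x d/dx. The Hermite polynomials are
# its eigenfunctions, so A_h comes out diagonal with entries 0, -1, ..., -N.
A_h = D @ D - X @ D

# Running cost c(x) = x^2 = He_2(x) + He_0(x) in Hermite coefficients.
c = np.zeros(N + 1)
c[0], c[2] = 1.0, 1.0

# Value-iteration-like recursion V <- V + h (c + (A_h - rho I) V); it
# converges to V = (rho I - A_h)^{-1} c because the spectrum of A_h is <= 0.
h = 0.1
V = np.zeros(N + 1)
for _ in range(400):
    V = V + h * (c + A_h @ V - rho * V)

# Direct solve of the projected resolvent equation, for comparison.
V_direct = np.linalg.solve(rho * np.eye(N + 1) - A_h, c)
```

With rho = 1, the projected equation has the exact solution V = He_0 + (1/3) He_2, and the recursion reproduces the direct solve; the truncation is exact here only because the OU generator leaves each polynomial degree invariant, which is what makes this a convenient toy case.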
- Publication: arXiv e-prints
- Pub Date: May 2023
- DOI: 10.48550/arXiv.2305.17628
- arXiv: arXiv:2305.17628
- Bibcode: 2023arXiv230517628H
- Keywords: Mathematics - Optimization and Control