Approximating the Stationary Hamilton-Jacobi-Bellman Equation by Hierarchical Tensor Products
Abstract
We treat infinite horizon optimal control problems by solving the associated stationary Hamilton-Jacobi-Bellman (HJB) equation numerically, computing the value function and an optimal feedback law. The dynamical systems under consideration are spatial discretizations of nonlinear parabolic partial differential equations (PDEs), which means that the HJB equation suffers from the curse of dimensionality. To overcome numerical infeasibility we use low-rank hierarchical tensor product approximations, i.e. tree-based tensor formats, in particular tensor trains (TT tensors) and multi-polynomials, since the resulting value function is expected to be smooth. To this end we reformulate the policy iteration algorithm as a linearization of the HJB equation. The resulting linear hyperbolic PDE remains the computational bottleneck due to its high dimension. By the method of characteristics it can be reformulated via the Koopman operator in the spirit of dynamic programming. We use a low-rank tensor representation to approximate the value function. The resulting operator equation is solved using high-dimensional quadrature, e.g. variational Monte Carlo methods. From the knowledge of the value function at computable samples $x_i$ we infer the function $x \mapsto v(x)$. We investigate the convergence of this procedure. Numerical evidence is given by controlling destabilized versions of the viscous Burgers and Schloegl equations.
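The abstract's key cost argument — that a TT (tensor-train) representation of the value function can be evaluated at a sample point with cost linear in the spatial dimension, which is what makes the sample-based inference of $x \mapsto v(x)$ feasible — can be illustrated with a minimal NumPy sketch. This is not the authors' code; the core shapes, ranks, and monomial basis below are illustrative assumptions, and the result is checked against a brute-force full-tensor contraction (only possible for small dimension).

```python
import numpy as np

def tt_eval(cores, feats):
    """Evaluate a TT-format multi-polynomial v(x) = sum of rank products.

    cores[d] has shape (r_prev, n_basis, r_next); feats[d] is the vector
    of basis functions evaluated at the d-th coordinate of x. The loop
    contracts one core at a time, so the cost is linear in the dimension.
    """
    v = np.ones((1,))
    for G, phi in zip(cores, feats):
        v = v @ np.einsum('ijk,j->ik', G, phi)  # (r_prev,) -> (r_next,)
    return float(v[0])

def poly_feats(x, n_basis):
    # Monomial basis 1, x, x^2, ... (an illustrative choice).
    return np.array([x**p for p in range(n_basis)])

rng = np.random.default_rng(0)
d, n_basis, rank = 4, 3, 2          # toy sizes; real problems use larger d
ranks = [1] + [rank] * (d - 1) + [1]
cores = [rng.standard_normal((ranks[i], n_basis, ranks[i + 1]))
         for i in range(d)]

x = rng.standard_normal(d)
feats = [poly_feats(xi, n_basis) for xi in x]
v_tt = tt_eval(cores, feats)

# Sanity check against the full coefficient tensor (exponential in d,
# so this is only feasible here because d is tiny).
full = cores[0]
for G in cores[1:]:
    full = np.tensordot(full, G, axes=([full.ndim - 1], [0]))
full = full.reshape((n_basis,) * d)
v_full = full
for phi in feats:
    v_full = np.tensordot(phi, v_full, axes=([0], [0]))
```

In the paper's setting, the coefficients of such a TT representation are not random but fitted from value-function samples $v(x_i)$ obtained along characteristics, e.g. by a variational Monte Carlo / least-squares regression; the sketch above only demonstrates the low-rank evaluation that such a fit relies on.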
 Publication:

arXiv e-prints
 Pub Date:
 November 2019
 arXiv:
 arXiv:1911.00279
 Bibcode:
 2019arXiv191100279O
 Keywords:

 Mathematics - Optimization and Control
 E-Print:
 Changed Thm. 3.5/6, more detailed numerical results, typos