Chance-Constrained Stochastic Optimal Control via Path Integral and Finite Difference Methods
Abstract
This paper addresses a continuous-time, continuous-space chance-constrained stochastic optimal control (SOC) problem via a Hamilton-Jacobi-Bellman (HJB) partial differential equation (PDE). Through Lagrangian relaxation, we convert the chance-constrained (risk-constrained) SOC problem to a risk-minimizing SOC problem, the cost function of which possesses the time-additive Bellman structure. We show that the risk-minimizing control synthesis is equivalent to solving an HJB PDE whose boundary condition can be tuned appropriately to achieve a desired level of safety. Furthermore, it is shown that the proposed risk-minimizing control problem can be viewed as a generalization of the problem of estimating the risk associated with a given control policy. Two numerical techniques are explored, namely the path integral and the finite difference method (FDM), to solve a class of risk-minimizing SOC problems whose associated HJB equation is linearizable via the Cole-Hopf transformation. Using a 2D robot navigation example, we validate the proposed control synthesis framework and compare the solutions obtained using path integral and FDM.
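To make the path-integral idea concrete: under the Cole-Hopf transformation, the value function becomes V = -lambda * log(psi), where the "desirability" psi admits a Feynman-Kac representation as an expectation over uncontrolled sample paths, which can be estimated by Monte Carlo. The sketch below is illustrative only, not the paper's implementation: all dynamics, costs, obstacle geometry, and parameter names (`sigma`, `lam`, `eta`, etc.) are assumptions chosen to echo the 2D navigation example, with absorbing behavior at an unsafe region standing in for the safety boundary condition.

```python
import numpy as np

def psi_path_integral(x0, T=1.0, dt=0.01, sigma=0.5, lam=1.0,
                      n_samples=2000, goal=(1.0, 1.0),
                      obs_center=(0.5, 0.5), obs_radius=0.2,
                      eta=10.0, seed=0):
    """Monte Carlo (path-integral) estimate of the Cole-Hopf desirability
    psi(x0, 0) = E[exp(-phi(x_T)/lam)] for 2D passive dynamics dx = sigma dW,
    with trajectories absorbed on entering a circular unsafe region.
    Hypothetical setup; not the paper's exact formulation."""
    rng = np.random.default_rng(seed)
    goal = np.asarray(goal, float)
    obs_center = np.asarray(obs_center, float)
    n_steps = int(T / dt)
    x = np.tile(np.asarray(x0, float), (n_samples, 1))
    alive = np.ones(n_samples, dtype=bool)  # True until absorbed by obstacle
    for _ in range(n_steps):
        # Euler-Maruyama step of the uncontrolled (zero-input) diffusion
        x[alive] += sigma * np.sqrt(dt) * rng.standard_normal((alive.sum(), 2))
        # absorb any trajectory that enters the unsafe region
        hit = np.linalg.norm(x - obs_center, axis=1) < obs_radius
        alive &= ~hit
    # terminal cost: squared distance to goal if safe, penalty eta if absorbed
    phi = np.where(alive, np.linalg.norm(x - goal, axis=1) ** 2, eta)
    return float(np.mean(np.exp(-phi / lam)))

# Value estimate at a start state (illustrative):
psi0 = psi_path_integral([0.0, 0.0])
V0 = -1.0 * np.log(psi0)  # V = -lam * log(psi), with lam = 1.0 here
```

Tuning the absorption penalty `eta` plays the role of the adjustable boundary condition: a larger penalty on absorbed paths steers the resulting control toward safer trajectories.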
Publication: arXiv e-prints
Pub Date: May 2022
DOI: 10.48550/arXiv.2205.00628
arXiv: arXiv:2205.00628
Bibcode: 2022arXiv220500628P
Keywords: Mathematics - Optimization and Control; Computer Science - Robotics; Electrical Engineering and Systems Science - Systems and Control