State and Control Path-Dependent Stochastic Zero-Sum Differential Games: Viscosity Solutions of Path-Dependent Hamilton-Jacobi-Isaacs Equations
Abstract
In this paper, we consider state and control path-dependent stochastic zero-sum differential games, where the dynamics and the running cost depend on both the state and control paths of the players. Using the notion of nonanticipative strategies, we define lower and upper value functionals, which are functions of the initial state and control paths of the players. We prove that the value functionals satisfy the dynamic programming principle. The associated lower and upper Hamilton-Jacobi-Isaacs (HJI) equations arising from the dynamic programming principle are state and control path-dependent nonlinear second-order partial differential equations. We apply the functional Itô calculus to prove that the lower and upper value functionals are viscosity solutions of the (lower and upper) state and control path-dependent HJI equations, where the notion of viscosity solutions is defined on a compact subset of a $\kappa$-Hölder space introduced in \cite{Tang_DCD_2015}. For the state path-dependent case, the uniqueness of viscosity solutions together with the Isaacs condition implies the existence of the game value, and under additional assumptions we prove the uniqueness of classical solutions for the state path-dependent HJI equations.
Publication: arXiv e-prints
Pub Date: November 2019
arXiv: arXiv:1911.00315
Bibcode: 2019arXiv191100315M
Keywords: Mathematics - Optimization and Control; 49N70; 49L20; 49L25
E-Print: 29 pages