We treat infinite-horizon optimal control problems by numerically solving the associated stationary Hamilton-Jacobi-Bellman (HJB) equation, computing the value function and an optimal feedback control law. The dynamical systems under consideration are spatial discretizations of nonlinear parabolic partial differential equations (PDEs), so the HJB equation suffers from the curse of dimensionality. To overcome this numerical infeasibility we use low-rank hierarchical tensor product approximations, or tree-based tensor formats, in particular tensor trains (TT tensors) combined with multivariate polynomials, since the resulting value function is expected to be smooth. To this end we reformulate the policy iteration algorithm as a linearization of the HJB equation. The resulting linear hyperbolic PDE remains the computational bottleneck due to its high dimensionality. By the method of characteristics it can be reformulated via the Koopman operator in the spirit of dynamic programming. We approximate the value function in a low-rank tensor representation and solve the resulting operator equation using high-dimensional quadrature, e.g. variational Monte Carlo methods. From the values of the value function at computable samples $x_i$ we infer the function $x \mapsto v(x)$. We investigate the convergence of this procedure. By controlling destabilized versions of the viscous Burgers and Schloegl equations, numerical evidence is provided.
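The core idea that policy iteration turns the nonlinear HJB equation into a sequence of linear problems can be sketched on a one-dimensional linear-quadratic toy problem. This is an illustrative assumption of ours, not the paper's high-dimensional PDE setting: here each policy-evaluation step reduces to a scalar linear equation, and the improvement step updates the feedback gain.

```python
import math

# Toy setting (illustrative, not the paper's): dynamics x' = a*x + b*u,
# running cost q*x^2 + r*u^2, infinite horizon. With a linear feedback
# u = -k*x the value function is v(x) = p*x^2, and policy evaluation
# solves a *linear* equation for p -- the linearization of the HJB
# equation that policy iteration exploits.

a, b, q, r = 1.0, 1.0, 1.0, 1.0   # unstable open-loop dynamics (a > 0)

def policy_iteration(k, iters=30):
    for _ in range(iters):
        # Policy evaluation: 2*(a - b*k)*p + q + r*k**2 = 0, linear in p;
        # solvable whenever the closed loop a - b*k is stable (< 0).
        p = -(q + r * k**2) / (2.0 * (a - b * k))
        # Policy improvement: u = -(b*p/r)*x minimizes the Hamiltonian.
        k = b * p / r
    return p, k

p, k = policy_iteration(k=2.0)    # start from a stabilizing gain
# p converges to the positive root 1 + sqrt(2) of the scalar Riccati
# equation 2*a*p - (b**2/r)*p**2 + q = 0.
print(p, 1.0 + math.sqrt(2.0))
```

In the paper's setting the unknown $p$ is replaced by a high-dimensional value function in TT format and the linear policy-evaluation step by a linear hyperbolic PDE, but the alternation between linear evaluation and feedback improvement is the same.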