Revisit Policy Optimization in Matrix Form
Abstract
In the tabular case, when the reward and environment dynamics are known, policy evaluation can be written as $\bm{V}_{\bm{\pi}} = (I - \gamma P_{\bm{\pi}})^{-1} \bm{r}_{\bm{\pi}}$, where $P_{\bm{\pi}}$ is the state transition matrix under policy ${\bm{\pi}}$ and $\bm{r}_{\bm{\pi}}$ is the reward signal under ${\bm{\pi}}$. The difficulty is that $P_{\bm{\pi}}$ and $\bm{r}_{\bm{\pi}}$ are both entangled with ${\bm{\pi}}$: every time we update ${\bm{\pi}}$, they change with it. In this paper, we leverage the notation from \cite{wang2007dual} to disentangle ${\bm{\pi}}$ from the environment dynamics, which makes optimization over the policy more straightforward. We show that the policy gradient theorem \cite{sutton2018reinforcement} and TRPO \cite{schulman2015trust} fit into a more general framework, and that this notation has good potential to be extended to model-based reinforcement learning.
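To make the matrix form concrete, here is a minimal sketch of closed-form policy evaluation on a toy MDP. The state/action counts, random dynamics `P`, rewards `r`, and uniform policy `pi` are illustrative assumptions, not quantities from the paper.

```python
# Minimal sketch of closed-form policy evaluation: V_pi = (I - gamma P_pi)^{-1} r_pi.
# The toy MDP below (3 states, 2 actions, random dynamics, uniform policy) is an
# illustrative assumption, not an example taken from the paper.
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)

# Environment dynamics P[a, s, s'] = Pr(s' | s, a) and rewards r[s, a].
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)  # each row is a valid distribution over s'
r = rng.random((n_states, n_actions))

# A uniform stochastic policy pi[s, a] = Pr(a | s).
pi = np.full((n_states, n_actions), 1.0 / n_actions)

# Policy-conditioned quantities: note both are "mixed" with pi,
# so they must be recomputed after every policy update.
P_pi = np.einsum("sa,ast->st", pi, P)  # P_pi[s, s'] = sum_a pi[s, a] P[a, s, s']
r_pi = (pi * r).sum(axis=1)            # r_pi[s]     = sum_a pi[s, a] r[s, a]

# Policy evaluation in one linear solve.
V_pi = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
print(V_pi)
```

Using `np.linalg.solve` rather than forming $(I - \gamma P_{\bm{\pi}})^{-1}$ explicitly is both cheaper and numerically safer.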
Publication: arXiv e-prints
Pub Date: September 2019
DOI: 10.48550/arXiv.1909.09186
arXiv: arXiv:1909.09186
Bibcode: 2019arXiv190909186L
Keywords: Computer Science - Machine Learning; Computer Science - Artificial Intelligence