On Policy Gradients
Abstract
The goal of policy gradient approaches is to find a policy, within a given class of policies, that maximizes the expected return. Given a differentiable model of the policy, we apply gradient ascent to reach a local optimum; gradient ascent is used mainly because it is theoretically well researched. The main issue is that the gradient of the expected return with respect to the policy parameters is not available in closed form, so it must be estimated. Since policy gradient algorithms also tend to require on-policy data for the gradient estimate, their biggest weakness is poor sample efficiency. For this reason, most research focuses on algorithms with improved sample efficiency. This paper provides a formal introduction to policy gradients, traces the development of policy gradient approaches, and should enable the reader to follow current research on the topic.
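As standard background (the notation below is the conventional one, not taken from the paper itself), the quantity being estimated is usually written via the score-function (REINFORCE) form of the policy gradient theorem, with a Monte Carlo estimate built from on-policy trajectories:

```latex
% Score-function (REINFORCE) form of the policy gradient; standard background.
% J = expected return, \pi_\theta = parameterized policy, \tau = trajectory,
% R(\tau) = return of a trajectory -- conventional symbols, not the paper's.
\nabla_\theta J(\theta)
  = \mathbb{E}_{\tau \sim \pi_\theta}\!\left[
      \Bigl(\sum_{t=0}^{T} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\Bigr) R(\tau)
    \right]
  \approx \frac{1}{N} \sum_{i=1}^{N}
      \Bigl(\sum_{t=0}^{T} \nabla_\theta \log \pi_\theta\bigl(a_t^{(i)} \mid s_t^{(i)}\bigr)\Bigr) R\bigl(\tau^{(i)}\bigr)
```

The Monte Carlo sum on the right also shows why on-policy data is needed: the expectation is taken under the current policy \(\pi_\theta\), so the \(N\) sampled trajectories must come from that same policy or the estimate becomes biased.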
- Publication: arXiv e-prints
- Pub Date: November 2019
- DOI: 10.48550/arXiv.1911.04817
- arXiv: arXiv:1911.04817
- Bibcode: 2019arXiv191104817K
- Keywords: Computer Science - Machine Learning; Statistics - Machine Learning