The Concept of Criticality in Reinforcement Learning
Abstract
Reinforcement learning methods carry a well-known bias-variance tradeoff in n-step algorithms for optimal control. Unfortunately, this tradeoff has rarely been addressed in current research. It holds independently of the choice of algorithm, such as n-step SARSA, n-step Expected SARSA, or n-step Tree Backup. A small n results in large bias, while a large n leads to large variance. The literature offers no straightforward recipe for the best choice of this value. While all current n-step algorithms use a fixed value of n over the state space, we extend the framework of n-step updates by allowing each state to have its own specific n. We propose a solution to this problem within the context of human-aided reinforcement learning. Our approach is based on the observation that a human can learn more efficiently if she receives input regarding the criticality of a given state, and thus the amount of attention she needs to invest into learning in that state. This observation is related to the idea that each state of the MDP has a certain measure of criticality, which indicates how much the choice of action in that state influences the return. In our algorithm, the RL agent uses the criticality measure, a function provided by a human trainer, to locally choose the best step number n for the update of the Q-function.
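The idea of a state-specific n can be sketched in code. The sketch below is an illustrative assumption, not the paper's exact algorithm: the linear mapping in `n_from_criticality` (critical states get a small n, favoring low variance over low bias) and the dictionary-based tabular Q-table are hypothetical choices made only for this example.

```python
def n_from_criticality(crit, n_min=1, n_max=8):
    """Map a criticality score in [0, 1] to a step number n.

    Assumed mapping (illustrative): highly critical states (crit near 1)
    use a small n, while non-critical states use a large n.
    """
    return int(round(n_max - crit * (n_max - n_min)))

def nstep_sarsa_update(Q, traj, t, n, gamma=0.9, alpha=0.1):
    """Apply one n-step SARSA update to Q[(s_t, a_t)].

    traj is a recorded trajectory: a list of (state, action, reward)
    tuples, where reward is received after taking the action.
    """
    T = len(traj)
    end = min(t + n, T)
    # n-step return: discounted rewards over at most n steps ...
    G = sum(gamma ** k * traj[t + k][2] for k in range(end - t))
    # ... plus a bootstrapped Q-value if the trajectory continues.
    if end < T:
        s_n, a_n, _ = traj[end]
        G += gamma ** (end - t) * Q[(s_n, a_n)]
    s, a, _ = traj[t]
    Q[(s, a)] += alpha * (G - Q[(s, a)])
    return Q
```

In use, the agent would query the human-provided criticality function at state s_t and pass `n_from_criticality(crit(s_t))` as n, so each state's update horizon is chosen locally rather than fixed globally.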
Publication: arXiv e-prints
Pub Date: October 2018
arXiv: arXiv:1810.07254
Bibcode: 2018arXiv181007254S
Keywords:
 Computer Science - Machine Learning;
 Computer Science - Artificial Intelligence;
 Computer Science - Multiagent Systems;
 Statistics - Machine Learning