Efficient Exploration through Bayesian Deep Q-Networks
Abstract
We study reinforcement learning (RL) in high-dimensional episodic Markov decision processes (MDPs). We consider value-based RL when the optimal Q-value is a linear function of a d-dimensional state-action feature representation. For instance, in deep Q-networks (DQN), the Q-value is a linear function of the feature representation layer (output layer). We propose two algorithms, one based on optimism, LINUCB, and another based on posterior sampling, LINPSRL. We guarantee frequentist and Bayesian regret upper bounds of O(d sqrt{T}) for these two algorithms, where T is the number of episodes. We extend these methods to deep RL and propose Bayesian deep Q-networks (BDQN), an efficient Thompson sampling algorithm for high-dimensional RL. We deploy the double DQN (DDQN) approach, and instead of learning the last layer of the Q-network using linear regression, we use Bayesian linear regression, resulting in an approximate posterior over the Q-function. This allows us to directly incorporate the uncertainty over the Q-function and deploy Thompson sampling on the learned posterior distribution, resulting in an efficient exploration/exploitation trade-off. We empirically study the behavior of BDQN on a wide range of Atari games. Since BDQN carries out more efficient exploration and exploitation, it reaches higher returns substantially faster than DDQN.
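The core mechanism the abstract describes, Bayesian linear regression over the network's last-layer features followed by Thompson sampling over the resulting posterior, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the feature dimension `d`, the prior and noise scales `sigma` and `sigma_n`, and the synthetic replay data are all assumptions made here for the example.

```python
import numpy as np

d, n_actions = 4, 3     # feature dimension and action count (assumed for illustration)
sigma = 1.0             # prior std of the last-layer weights (assumption)
sigma_n = 1.0           # observation-noise std (assumption)
rng = np.random.default_rng(0)

def posterior(Phi, y):
    """Gaussian posterior over one action's weight vector.

    cov  = (Phi^T Phi / sigma_n^2 + I / sigma^2)^{-1}
    mean = cov @ Phi^T y / sigma_n^2
    """
    cov = np.linalg.inv(Phi.T @ Phi / sigma_n**2 + np.eye(d) / sigma**2)
    mean = cov @ Phi.T @ y / sigma_n**2
    return mean, cov

# Synthetic (feature, target) replay data per action, purely for illustration.
data = [(rng.normal(size=(20, d)), rng.normal(size=20)) for _ in range(n_actions)]
posteriors = [posterior(Phi, y) for Phi, y in data]

def thompson_action(phi):
    """Sample one weight vector per action, then act greedily on sampled Q-values."""
    q = [rng.multivariate_normal(mean, cov) @ phi for mean, cov in posteriors]
    return int(np.argmax(q))

a = thompson_action(rng.normal(size=d))
```

Sampling a fresh weight vector per decision is what injects exploration: actions whose posteriors are wide are occasionally sampled high and thus tried, while well-understood actions are exploited.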
Publication: arXiv e-prints
Pub Date: February 2018
DOI: 10.48550/arXiv.1802.04412
arXiv: arXiv:1802.04412
Bibcode: 2018arXiv180204412A
Keywords:
 Computer Science - Artificial Intelligence;
 Computer Science - Machine Learning;
 Statistics - Machine Learning