Hedging using reinforcement learning: Contextual $k$-Armed Bandit versus $Q$-learning
Abstract
The construction of replication strategies for contingent claims in the presence of risk and market friction is a key problem of financial engineering. In real markets, continuous replication, such as in the model of Black, Scholes and Merton, is not only unrealistic but also undesirable due to high transaction costs. Over the last decades, stochastic optimal-control methods have been developed to balance effective replication against losses. More recently, with the rise of artificial intelligence, temporal-difference Reinforcement Learning, in particular variations of $Q$-learning in conjunction with Deep Neural Networks, has attracted significant interest. From a practical point of view, however, such methods are often relatively sample-inefficient, hard to train and lack performance guarantees. This motivates the investigation of a stable benchmark algorithm for hedging. In this article, the hedging problem is viewed as an instance of a risk-averse contextual $k$-armed bandit problem, for which a large body of theoretical results and well-studied algorithms are available. We find that the $k$-armed bandit model naturally fits the $P\&L$ formulation of hedging, providing a more accurate and sample-efficient approach than $Q$-learning and reducing to the Black-Scholes model in the absence of transaction costs and risks.
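The bandit formulation described above can be illustrated with a minimal sketch: discretized hedge ratios act as arms, and the reward is the negative squared one-step hedging error (a simple risk-averse P\&L objective). All names, parameters and market dynamics here are assumptions for illustration, not the paper's implementation; in the absence of transaction costs, the learned arm should sit near the option's delta, mirroring the reduction to Black-Scholes noted in the abstract.

```python
import numpy as np

# Illustrative sketch only: a k-armed bandit for one-step hedging.
# Parameters and dynamics are assumed, not taken from the paper.
rng = np.random.default_rng(0)

S0, sigma, dt = 100.0, 0.2, 1.0 / 252   # spot, volatility, one trading day
delta_true = 0.6                         # assumed delta of the claim
arms = np.linspace(0.0, 1.0, 11)         # candidate hedge ratios (the k arms)
k = len(arms)

def reward(a):
    """Risk-averse P&L objective: negative squared hedging error."""
    dS = S0 * sigma * np.sqrt(dt) * rng.standard_normal()
    dV = delta_true * dS                 # linearised option P&L over one step
    return -(dV - a * dS) ** 2           # no transaction costs in this sketch

# Epsilon-greedy bandit with incremental sample-mean value estimates.
q, n = np.zeros(k), np.zeros(k)
eps = 0.1
for t in range(20_000):
    i = rng.integers(k) if rng.random() < eps else int(np.argmax(q))
    r = reward(arms[i])
    n[i] += 1
    q[i] += (r - q[i]) / n[i]            # running mean of observed rewards

best = float(arms[int(np.argmax(q))])
print(f"learned hedge ratio: {best:.2f}")
```

With no frictions the squared-error reward is maximised at the true delta, so the agent's preferred arm converges toward 0.6; adding a transaction-cost penalty to `reward` would shift the optimum away from perfect replication, which is the trade-off the abstract describes.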
 Publication:

arXiv e-prints
 Pub Date:
 July 2020
 arXiv:
 arXiv:2007.01623
 Bibcode:
 2020arXiv200701623C
 Keywords:

 Computer Science - Machine Learning;
 Quantitative Finance - Computational Finance;
 Statistics - Machine Learning
 E-Print:
 15 pages, 7 figures