Double Deep Q-Learning for Optimal Execution
Abstract
Optimal trade execution is an important problem faced by essentially all traders. Much research into optimal execution uses stringent model assumptions and applies continuous time stochastic control to solve them. Here, we instead take a model free approach and develop a variation of Deep Q-Learning to estimate the optimal actions of a trader. The model is a fully connected Neural Network trained using Experience Replay and Double DQN with input features given by the current state of the limit order book, other trading signals, and available execution actions, while the output is the Q-value function estimating the future rewards under an arbitrary action. We apply our model to nine different stocks and find that it outperforms the standard benchmark approach on most stocks using the measures of (i) mean and median outperformance, (ii) probability of outperformance, and (iii) gain-loss ratios.
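The abstract's core update rule is the Double DQN target: the online network selects the greedy next action, while a separate target network evaluates it, reducing the overestimation bias of standard Q-learning. A minimal sketch of that target computation (function name, array shapes, and the `dones` termination mask are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def double_dqn_targets(rewards, next_q_online, next_q_target, dones, gamma=0.99):
    """Double DQN targets for a batch of transitions.

    rewards       : (B,)   rewards from the replay buffer
    next_q_online : (B, A) online-network Q-values at the next state
    next_q_target : (B, A) target-network Q-values at the next state
    dones         : (B,)   1.0 if the episode terminated, else 0.0
    """
    # Online network picks the greedy action at the next state...
    greedy_actions = np.argmax(next_q_online, axis=1)
    # ...but the target network evaluates that action.
    evaluated = next_q_target[np.arange(len(rewards)), greedy_actions]
    # Standard bootstrapped target, zeroed at terminal states.
    return rewards + gamma * (1.0 - dones) * evaluated
```

In a full training loop these targets would be regressed against the online network's Q-values for the actions actually taken, using minibatches sampled from the experience replay buffer.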
Publication:
arXiv e-prints
 Pub Date:
 December 2018
 arXiv:
 arXiv:1812.06600
 Bibcode:
 2018arXiv181206600N
 Keywords:

 Quantitative Finance - Trading and Market Microstructure;
 Computer Science - Machine Learning;
 Quantitative Finance - Computational Finance;
 Statistics - Machine Learning;
 91G99;
 93E35
 E-Print:
 20 pages, 7 figures, 1 table. Updated minor typos