Teaching Deep Convolutional Neural Networks to Play Go
Abstract
Mastering the game of Go has remained a long-standing challenge to the field of AI. Modern computer Go systems rely on processing millions of possible future positions to play well, but intuitively a stronger and more 'humanlike' way to play the game would be to rely on pattern recognition abilities rather than brute force computation. Following this sentiment, we train deep convolutional neural networks to play Go by teaching them to predict the moves made by expert Go players. To solve this problem we introduce a number of novel techniques, including a method of tying weights in the network to 'hard code' symmetries that are expected to exist in the target function, and demonstrate in an ablation study that they considerably improve performance. Our final networks are able to achieve move prediction accuracies of 41.1% and 44.4% on two different Go datasets, surpassing the previous state of the art on this task by significant margins. Additionally, while previous move prediction programs have not yielded strong Go playing programs, we show that the networks trained in this work acquired high levels of skill. Our convolutional neural networks can consistently defeat the well-known Go program GNU Go, indicating that they are state of the art among programs that do not use Monte Carlo Tree Search. They are also able to win some games against the state-of-the-art Go playing program Fuego while using a fraction of the play time. This success at playing Go indicates that high-level principles of the game were learned.
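The abstract mentions tying weights to 'hard code' symmetries of the target function. As a rough illustration only (not necessarily the paper's exact scheme), one simple way to enforce the dihedral symmetry of the Go board is to average a convolution kernel over the 8 symmetries of the square before applying it; the `symmetrize_kernel` helper below is a hypothetical sketch of that idea.

```python
import numpy as np

def symmetrize_kernel(kernel):
    """Average a square 2-D kernel over the 8 symmetries of the board
    (4 rotations, each optionally reflected), so the tied filter responds
    identically to rotated or reflected input patterns."""
    variants = []
    for k in range(4):
        rotated = np.rot90(kernel, k)
        variants.append(rotated)
        variants.append(np.fliplr(rotated))
    return np.mean(variants, axis=0)

# Example: tie the weights of a random 3x3 filter.
rng = np.random.default_rng(0)
w = rng.normal(size=(3, 3))
w_tied = symmetrize_kernel(w)

# The tied filter is invariant under 90-degree rotation and reflection.
assert np.allclose(w_tied, np.rot90(w_tied))
assert np.allclose(w_tied, np.fliplr(w_tied))
```

In practice such tying reduces the number of free parameters per filter, which is one plausible reason the ablation study finds it helpful; the details of how the paper ties weights across filters are in the full text, not this sketch.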
- Publication:
- arXiv e-prints
- Pub Date:
- December 2014
- DOI:
- 10.48550/arXiv.1412.3409
- arXiv:
- arXiv:1412.3409
- Bibcode:
- 2014arXiv1412.3409C
- Keywords:
- Computer Science - Artificial Intelligence; Computer Science - Machine Learning; Computer Science - Neural and Evolutionary Computing
- E-Print:
- 9 pages, 8 figures, 5 tables. Corrected typos, minor adjustment to table format