ParMAC: distributed optimisation of nested functions, with application to learning binary autoencoders
Abstract
Many powerful machine learning models are based on the composition of multiple processing layers, such as deep nets, which gives rise to nonconvex objective functions. A general, recent approach to optimise such "nested" functions is the method of auxiliary coordinates (MAC). MAC introduces an auxiliary coordinate for each data point in order to decouple the nested model into independent submodels. This decomposes the optimisation into steps that alternate between training single layers and updating the coordinates. It has the advantage that it reuses existing single-layer algorithms, introduces parallelism, and does not need to use chain-rule gradients, so it works with nondifferentiable layers. With large-scale problems, or when distributing the computation is necessary for faster training, the dataset may not fit in a single machine. It is then essential to limit the amount of communication between machines so it does not obliterate the benefit of parallelism. We describe a general way to achieve this, ParMAC. ParMAC works on a cluster of processing machines with a circular topology and alternates two steps until convergence: one step trains the submodels in parallel using stochastic updates, and the other trains the coordinates in parallel. Only submodel parameters, no data or coordinates, are ever communicated between machines. ParMAC exhibits high parallelism, low communication overhead, and facilitates data shuffling, load balancing, fault tolerance and streaming data processing. We study the convergence of ParMAC and propose a theoretical model of its runtime and parallel speedup. We develop ParMAC to learn binary autoencoders for fast, approximate image retrieval. We implement it in MPI in a distributed system and demonstrate nearly perfect speedups in a 128-processor cluster with a training set of 100 million high-dimensional points.
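The alternation the abstract describes can be illustrated with a toy simulation. The sketch below is a minimal, hypothetical example (not the paper's MPI binary-autoencoder implementation): a two-layer linear model y = w2·(x·w1) is decoupled with one auxiliary coordinate z per point, using a quadratic-penalty MAC objective (y − w2·z)² + μ(z − x·w1)². Each of P simulated "machines" keeps its data shard and its coordinates locally; in the W step the submodel parameters travel the ring of machines and receive stochastic updates on each shard, and in the Z step every machine updates its own coordinates in closed form with no communication. All names and the choice of μ, learning rate, and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

P = 4      # simulated machines arranged in a ring
n = 40     # data points per shard
d = 3      # input dimension
mu = 1.0   # MAC quadratic-penalty weight (illustrative choice)

# Ground-truth nested model y = w2 * (x . w1): two "layers".
true_w1 = np.array([1.0, -1.0, 0.5])
true_w2 = 2.0

# Each shard holds its inputs X, targets y, and auxiliary coordinates Z.
# Shards and coordinates never leave their machine.
shards = []
for p in range(P):
    X = rng.normal(size=(n, d))
    y = true_w2 * (X @ true_w1) + 0.01 * rng.normal(size=n)
    shards.append({"X": X, "y": y, "Z": rng.normal(size=n)})

w1 = rng.normal(size=d)   # layer-1 submodel parameters (communicated)
w2 = 1.0                  # layer-2 submodel parameter (communicated)

def w_step_on_shard(w1, w2, s, lr=0.02):
    """Stochastic updates of both submodels using one machine's local data."""
    for i in rng.permutation(len(s["y"])):
        x, yi, zi = s["X"][i], s["y"][i], s["Z"][i]
        g1 = mu * (x @ w1 - zi) * x   # gradient of mu/2 * (z - x.w1)^2
        g2 = (w2 * zi - yi) * zi      # gradient of 1/2 * (y - w2*z)^2
        w1 = w1 - lr * g1
        w2 = w2 - lr * g2
    return w1, w2

def z_step_on_shard(w1, w2, s):
    """Exact local minimiser of (y - w2*z)^2 + mu*(z - x.w1)^2 per point."""
    s["Z"] = (w2 * s["y"] + mu * (s["X"] @ w1)) / (w2**2 + mu)

for epoch in range(30):
    # W step: only the parameters (w1, w2) visit each machine in ring order.
    for p in range(P):
        w1, w2 = w_step_on_shard(w1, w2, shards[p])
    # Z step: embarrassingly parallel across machines, no communication.
    for s in shards:
        z_step_on_shard(w1, w2, s)
```

After training, the composed model w2·(x·w1) approximates the ground-truth nested function even though no chain-rule gradient through both layers was ever computed; only the per-layer problems were solved, which is the property that lets MAC reuse single-layer algorithms.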
 Publication:

arXiv e-prints
 Pub Date:
 May 2016
 arXiv:
 arXiv:1605.09114
 Bibcode:
 2016arXiv160509114C
 Keywords:

 Computer Science - Machine Learning;
 Computer Science - Distributed, Parallel, and Cluster Computing;
 Computer Science - Neural and Evolutionary Computing;
 Mathematics - Optimization and Control;
 Statistics - Machine Learning
 E-Print:
 40 pages, 13 figures. The abstract appearing here is slightly shorter than the one in the PDF file because of arXiv's limitation of the abstract field to 1920 characters