Transformations between deep neural networks
Abstract
We propose to test, and when possible establish, an equivalence between two different artificial neural networks by attempting to construct a data-driven transformation between them, using manifold-learning techniques. In particular, we employ diffusion maps with a Mahalanobis-like metric. If the construction succeeds, the two networks can be thought of as belonging to the same equivalence class. We first discuss transformation functions between only the outputs of the two networks; we then also consider transformations that take into account outputs (activations) of a number of internal neurons from each network. In general, Whitney's theorem dictates the number of measurements from one of the networks required to reconstruct each and every feature of the second network. The construction of the transformation function relies on a consistent, intrinsic representation of the network input space. We illustrate our algorithm by matching neural network pairs trained to learn (a) observations of scalar functions; (b) observations of two-dimensional vector fields; and (c) representations of images of a moving three-dimensional object (a rotating horse). The construction of such equivalence classes across different network instantiations clearly relates to transfer learning. We also expect that it will be valuable in establishing equivalence between different Machine Learning-based models of the same phenomenon observed through different instruments and by different research groups.
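The abstract's central tool is the diffusion-map embedding, which yields an intrinsic coordinate system for the data. Below is a minimal sketch of a standard diffusion map with an ordinary Gaussian kernel; the paper's Mahalanobis-like metric and the network-matching pipeline are not reproduced here, and the function name and bandwidth heuristic are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def diffusion_map(X, n_components=2, epsilon=None):
    """Sketch of a basic diffusion-map embedding (Gaussian kernel).
    The paper uses a Mahalanobis-like metric instead; this is the
    plain-Euclidean variant for illustration only."""
    # Pairwise squared Euclidean distances
    sq = np.sum(X**2, axis=1)
    D2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    if epsilon is None:
        # Common heuristic: median of nonzero squared distances
        epsilon = np.median(D2[D2 > 0])
    K = np.exp(-D2 / epsilon)
    # Row-normalize the kernel into a Markov transition matrix
    P = K / K.sum(axis=1, keepdims=True)
    # Eigendecomposition; the leading eigenpair is trivial (constant vector)
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # Skip the trivial first eigenvector; scale by eigenvalues
    return vecs[:, 1:n_components + 1] * vals[1:n_components + 1]

# Usage: embed points sampled from a noisy circle
rng = np.random.default_rng(0)
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.01 * rng.normal(size=(100, 2))
emb = diffusion_map(X, n_components=2)
```

The two nontrivial eigenvectors recover circle-like coordinates for this toy data; in the paper's setting, such intrinsic coordinates of the input space anchor the transformation constructed between the two networks.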
Publication: arXiv e-prints
Pub Date: July 2020
arXiv: arXiv:2007.05646
Bibcode: 2020arXiv200705646B
Keywords: Computer Science - Machine Learning; Statistics - Machine Learning
E-Print: 14 pages, 10 figures