Controlling Model Complexity in Probabilistic Model-Based Dynamic Optimization of Neural Network Structures
A method that simultaneously optimizes both the structure of a neural network and its connection weights in a single training loop can reduce the enormous computational cost of neural architecture search. We focus on probabilistic model-based dynamic neural network structure optimization, which considers a probability distribution over structure parameters and simultaneously optimizes both the distribution parameters and the connection weights by gradient methods. Because the existing algorithm searches only for structures that minimize the training loss, it may find overly complicated structures. In this paper, we propose introducing a penalty term to control the model complexity of the obtained structures. We formulate the penalty term using the number of weights or units and derive its analytical natural gradient. The proposed method minimizes the objective function with the penalty term injected, based on stochastic gradient descent. We apply the proposed method to unit selection in a fully-connected neural network and connection selection in a convolutional neural network. The experimental results show that the proposed method can control model complexity while maintaining performance.
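The core update described in the abstract can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: a binary mask over the units of a small linear model is drawn from independent Bernoulli factors with parameters `theta`, the expected loss is minimized by a Monte Carlo estimate of its natural gradient (for a Bernoulli factor, the natural gradient of the log-likelihood is `m - theta`), and the complexity penalty `lam * sum(theta)` (the expected number of active units) contributes its analytical natural gradient `lam * theta * (1 - theta)`. All names, dimensions, and hyperparameters here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not from the paper): a linear model whose units
# can be switched on/off by a binary mask m ~ Bernoulli(theta).
d = 8
X = rng.normal(size=(64, d))
w_true = np.zeros(d)
w_true[:3] = 1.0                                  # only 3 units are useful
y = X @ w_true + 0.1 * rng.normal(size=64)

w = rng.normal(scale=0.1, size=d)                 # connection weights
theta = np.full(d, 0.5)                           # Bernoulli distribution parameters
lam = 0.05                                        # penalty strength (assumed)

def loss(w, m):
    """Mean squared error of the masked linear model."""
    r = X @ (w * m) - y
    return float(np.mean(r ** 2))

for step in range(300):
    # Monte Carlo estimate of the natural gradient of E_m[loss] w.r.t. theta.
    # For independent Bernoulli factors, F^{-1} grad log p(m|theta) = m - theta,
    # so L(m) * (m - theta) is an unbiased natural-gradient estimate.
    K = 8
    nat_grad = np.zeros(d)
    grad_w = np.zeros(d)
    for _ in range(K):
        m = (rng.random(d) < theta).astype(float)
        nat_grad += loss(w, m) * (m - theta) / K
        grad_w += 2 * (X.T @ (X @ (w * m) - y)) * m / (len(y) * K)
    # Analytical natural gradient of the penalty lam * sum(theta):
    # F^{-1} * lam * 1 = lam * theta * (1 - theta).
    nat_grad += lam * theta * (1.0 - theta)
    theta = np.clip(theta - 0.1 * nat_grad, 0.01, 0.99)
    w -= 0.1 * grad_w

print("theta:", np.round(theta, 2))
```

The penalty's natural gradient is always non-negative, so it steadily pushes each `theta` toward zero unless the loss term pushes back, which is how the method trades off complexity against training performance.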
- Pub Date: July 2019
- Computer Science - Neural and Evolutionary Computing;
- Computer Science - Machine Learning;
- Statistics - Machine Learning
- Accepted as a conference paper at the 28th International Conference on Artificial Neural Networks (ICANN 2019). The final authenticated publication will be available in the Springer Lecture Notes in Computer Science (LNCS). 13 pages