Balanced One-shot Neural Architecture Optimization
Abstract
The ability to rank candidate architectures is key to the performance of neural architecture search (NAS). One-shot NAS has been proposed to reduce this expense, but it shows inferior performance compared with conventional NAS and is not adequately stable. We investigate this and find that the ranking correlation between architectures under one-shot training and under stand-alone full training is poor, which misleads the search algorithm and prevents it from discovering better architectures. Further, we show that under the current one-shot method the training of architectures of different sizes is imbalanced, which makes the evaluated performances of the architectures less predictive of their ground-truth performances and heavily degrades the ranking correlation. Consequently, we propose Balanced NAO, in which we introduce balanced training of the supernet during the search procedure, encouraging more updates for large architectures than for small ones by sampling architectures in proportion to their model sizes. Comprehensive experiments verify that our proposed method is effective and robust and leads to a more stable search. The final discovered architecture shows significant improvements over the baselines, with a test error rate of 2.60% on CIFAR-10 and a top-1 accuracy of 74.4% on ImageNet under the mobile setting. Code and model checkpoints are publicly available at https://github.com/renqianluo/NAO_pytorch.
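As a rough illustration of the size-proportional sampling idea described in the abstract, the sketch below samples a candidate sub-network with probability proportional to its parameter count before each supernet update. All names here (`supernet`, `extract_subnet_params`, the forward signature `supernet(inputs, arch)`) are assumptions for illustration only, not the interface of the authors' released NAO_pytorch code.

```python
import random

def count_params(arch, supernet):
    # Number of weights the sub-network `arch` activates in the supernet
    # (extract_subnet_params is a hypothetical helper).
    return sum(p.numel() for p in supernet.extract_subnet_params(arch))

def sample_arch_proportional(candidates, supernet):
    # Larger architectures get proportionally higher sampling probability,
    # so they receive more gradient updates than smaller ones.
    sizes = [count_params(a, supernet) for a in candidates]
    total = float(sum(sizes))
    weights = [s / total for s in sizes]
    return random.choices(candidates, weights=weights, k=1)[0]

def train_supernet_step(supernet, candidates, batch, optimizer, criterion):
    # One balanced training step: sample a sub-network by size, then update
    # the shared supernet weights through that sub-network only.
    arch = sample_arch_proportional(candidates, supernet)
    inputs, targets = batch
    optimizer.zero_grad()
    outputs = supernet(inputs, arch)
    loss = criterion(outputs, targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a uniform-sampling one-shot setup, every candidate would instead be drawn with equal probability; the only change sketched here is the size-weighted draw.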
- Publication:
- arXiv e-prints
- Pub Date:
- September 2019
- DOI:
- 10.48550/arXiv.1909.10815
- arXiv:
- arXiv:1909.10815
- Bibcode:
- 2019arXiv190910815L
- Keywords:
- Computer Science - Machine Learning;
- Computer Science - Neural and Evolutionary Computing;
- Statistics - Machine Learning
- E-Print:
- Code and model checkpoints are publicly available at https://github.com/renqianluo/NAO_pytorch