Theoretical Analysis of Self-Training with Deep Networks on Unlabeled Data
Abstract
Self-training algorithms, which train a model to fit pseudo-labels predicted by another previously learned model, have been very successful for learning with unlabeled data using neural networks. However, the current theoretical understanding of self-training only applies to linear models. This work provides a unified theoretical analysis of self-training with deep networks for semi-supervised learning, unsupervised domain adaptation, and unsupervised learning. At the core of our analysis is a simple but realistic "expansion" assumption, which states that a low-probability subset of the data must expand to a neighborhood with large probability relative to the subset. We also assume that neighborhoods of examples in different classes have minimal overlap. We prove that under these assumptions, the minimizers of population objectives based on self-training and input-consistency regularization will achieve high accuracy with respect to ground-truth labels. By using off-the-shelf generalization bounds, we immediately convert this result to sample complexity guarantees for neural nets that are polynomial in the margin and Lipschitzness. Our results help explain the empirical successes of recently proposed self-training algorithms which use input consistency regularization.
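The objective analyzed in the abstract combines two terms: fitting pseudo-labels produced by a previously learned teacher, and input-consistency regularization that penalizes prediction changes under small perturbations of each example. The sketch below illustrates that combined objective on toy data with linear softmax models; all names, the perturbation scale, and the weight `lam` are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy setup (hypothetical): a fixed "teacher" supplies pseudo-labels
# on unlabeled data; we evaluate a student's self-training objective.
X = rng.normal(size=(8, 4))             # unlabeled examples
W_teacher = rng.normal(size=(4, 3))     # previously learned model
pseudo = softmax(X @ W_teacher).argmax(axis=1)  # hard pseudo-labels

def self_training_loss(W_student, lam=1.0, eps=0.1):
    p = softmax(X @ W_student)
    # Term 1: cross-entropy against the teacher's pseudo-labels.
    ce = -np.log(p[np.arange(len(X)), pseudo] + 1e-12).mean()
    # Term 2: input-consistency regularization -- the student's
    # predictions should be stable on a perturbed neighbor of each
    # example (a simple proxy for agreement on its neighborhood).
    Xp = X + eps * rng.normal(size=X.shape)
    q = softmax(Xp @ W_student)
    consistency = ((p - q) ** 2).sum(axis=1).mean()
    return ce + lam * consistency

print(self_training_loss(rng.normal(size=(4, 3))))
```

Minimizing this kind of population objective is what the paper's expansion and class-separation assumptions connect to accuracy on ground-truth labels.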
 Publication:

arXiv e-prints
 Pub Date:
 October 2020
 DOI:
 10.48550/arXiv.2010.03622
 arXiv:
 arXiv:2010.03622
 Bibcode:
 2020arXiv201003622W
 Keywords:

 Computer Science - Machine Learning;
 Statistics - Machine Learning
 E-Print:
 Published at ICLR 2021