Generalization Bounds for Domain Adaptation
Abstract
In this paper, we provide a new framework for deriving generalization bounds of the learning process for domain adaptation, and we then apply the derived bounds to analyze the asymptotic convergence of the learning process. Without loss of generality, we consider two representative kinds of domain adaptation: one with multiple sources and the other combining source and target data. In particular, we use the integral probability metric to measure the difference between two domains. For each kind of domain adaptation, we develop a related Hoeffding-type deviation inequality and a symmetrization inequality to obtain the corresponding generalization bound based on the uniform entropy number. We also generalize the classical McDiarmid's inequality to a more general setting in which independent random variables can take values from different domains. Using this inequality, we then obtain generalization bounds based on the Rademacher complexity. Afterwards, we analyze the asymptotic convergence and the rate of convergence of the learning process for both kinds of domain adaptation. We also discuss the factors that affect the asymptotic behavior of the learning process, and our numerical experiments support the theoretical findings.
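As a point of reference (not stated in the abstract itself), the integral probability metric between two distributions $P$ and $Q$ is standardly defined as

$$D_{\mathcal{F}}(P, Q) \;=\; \sup_{f \in \mathcal{F}} \left| \, \mathbb{E}_{x \sim P}\, f(x) \;-\; \mathbb{E}_{x \sim Q}\, f(x) \, \right|,$$

where $\mathcal{F}$ is a class of real-valued functions; different choices of $\mathcal{F}$ recover familiar distances (e.g., the total variation distance or the Wasserstein distance). The particular function class used in the paper is not specified in this abstract.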
- Publication: arXiv e-prints
- Pub Date: April 2013
- DOI: 10.48550/arXiv.1304.1574
- arXiv: arXiv:1304.1574
- Bibcode: 2013arXiv1304.1574Z
- Keywords: Computer Science - Machine Learning; Mathematics - Probability