Generalization and Overfitting in Matrix Product State Machine Learning Architectures
Abstract
While overfitting and, more generally, double descent are ubiquitous in machine learning, increasing the number of parameters of the most widely used tensor network, the matrix product state (MPS), has generally led to monotonic improvement of test performance in previous studies. To better understand the generalization properties of architectures parameterized by MPS, we construct artificial data which can be exactly modeled by an MPS and train models with different numbers of parameters. We observe model overfitting for one-dimensional data, but also find that overfitting is less significant for more complex data, while with MNIST image data we do not find any signatures of overfitting. We speculate that the generalization properties of MPS depend on the properties of the data: with one-dimensional data (for which the MPS ansatz is the most suitable) MPS is prone to overfitting, while with more complex data which cannot be fit by an MPS exactly, overfitting may be much less significant.
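To make the setup concrete, the following is a minimal sketch (not the authors' code) of how an MPS parameterizes a model whose parameter count is controlled by the bond dimension. The local feature map and the contraction pattern are common choices in MPS-based learning; all function names here are illustrative.

```python
import numpy as np

def feature_map(x):
    # A common local feature map for a scalar input x in [0, 1]:
    # embed each input component into a 2-dimensional "physical" space.
    return np.array([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)])

def random_mps(n_sites, bond_dim, phys_dim=2, seed=0):
    # One rank-3 tensor per site with shape (left bond, physical, right bond);
    # the boundary bonds have dimension 1. The total parameter count grows
    # quadratically in bond_dim, which is the knob varied in such studies.
    rng = np.random.default_rng(seed)
    dims = [1] + [bond_dim] * (n_sites - 1) + [1]
    return [rng.normal(size=(dims[i], phys_dim, dims[i + 1])) / np.sqrt(bond_dim)
            for i in range(n_sites)]

def mps_output(mps, xs):
    # Model output: contract the MPS with the product of local feature
    # vectors, sweeping left to right and carrying the bond vector v.
    v = np.ones(1)
    for A, x in zip(mps, xs):
        v = np.einsum('l,lpr,p->r', v, A, feature_map(x))
    return v.item()

mps = random_mps(n_sites=8, bond_dim=4)
y = mps_output(mps, np.linspace(0.0, 1.0, 8))
```

Training such a model means optimizing the site tensors (e.g. by gradient descent or DMRG-style sweeps) against a loss on `mps_output`; increasing `bond_dim` increases the number of parameters without changing the architecture.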
Publication: arXiv e-prints
Pub Date: August 2022
arXiv: arXiv:2208.04372
Bibcode: 2022arXiv220804372S
Keywords: Computer Science - Machine Learning; Quantum Physics
E-Print: Main text: 8 pages and 6 figures. Supplementary material: 3 pages, 3 figures