Tight Time-Space Lower Bounds for Constant-Pass Learning
Abstract
In his breakthrough paper, Raz showed that any parity learning algorithm requires either quadratic memory or an exponential number of samples [FOCS'16, JACM'19]. A line of work that followed extended this result to a large class of learning problems. Until recently, all these results considered learning in the streaming model, where each sample is drawn independently, and the learner is allowed a single pass over the stream of samples. Garg, Raz, and Tal [CCC'19] considered a stronger model, allowing multiple passes over the stream. In the $2$-pass model, they showed that learning parities of size $n$ requires either a memory of size $n^{1.5}$ or at least $2^{\sqrt{n}}$ samples. (Their result also generalizes to other learning problems.) In this work, for any constant $q$, we prove tight memory-sample lower bounds for any parity learning algorithm that makes $q$ passes over the stream of samples. We show that such a learner requires either $\Omega(n^{2})$ memory size or at least $2^{\Omega(n)}$ samples. Beyond establishing a tight lower bound, this is the first non-trivial lower bound for $q$-pass learning for any $q\ge 3$. Similar to prior work, our results extend to any learning problem with many nearly-orthogonal concepts. We complement the lower bound with an upper bound, showing that parity learning with $q$ passes can be done efficiently with $O(n^2/\log q)$ memory.
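To make the setting concrete, here is a minimal sketch of the parity learning problem and of the classical learner the lower bound is measured against: each sample is a pair $(a, \langle a, x\rangle \bmod 2)$ for a uniformly random $a \in \{0,1\}^n$, and a single pass of Gaussian elimination over GF(2) recovers $x$ using $O(n)$ samples and $O(n^2)$ bits of memory. Function names are illustrative, not from the paper.

```python
import random

def parity_sample(x, n):
    # One sample: uniform a in {0,1}^n together with <a, x> mod 2.
    a = [random.randint(0, 1) for _ in range(n)]
    b = sum(ai * xi for ai, xi in zip(a, x)) % 2
    return a, b

def learn_parity(sample_stream, n):
    # One-pass learner: maintain a reduced row echelon form over GF(2).
    # At most n stored equations of n+1 bits each => O(n^2) memory.
    rows = {}  # pivot column -> (row, rhs)
    for a, b in sample_stream:
        a = list(a)
        # Reduce the incoming equation against all stored pivot rows.
        for p in range(n):
            if a[p] and p in rows:
                ra, rb = rows[p]
                a = [ai ^ ri for ai, ri in zip(a, ra)]
                b ^= rb
        # After reduction, any remaining 1 sits in a fresh pivot column.
        piv = next((p for p in range(n) if a[p]), None)
        if piv is None:
            continue  # linearly dependent sample; discard
        # Clear the new pivot column from stored rows to keep RREF.
        for p, (ra, rb) in list(rows.items()):
            if ra[piv]:
                rows[p] = ([ri ^ ai for ri, ai in zip(ra, a)], rb ^ b)
        rows[piv] = (a, b)
        if len(rows) == n:
            break  # full rank: x is determined
    # In full RREF with rank n, each stored row is a unit vector,
    # so the right-hand side at pivot p is exactly x[p].
    return [rows[p][1] if p in rows else 0 for p in range(n)]
```

With independent uniform samples, roughly $n + O(1)$ equations suffice for full rank with high probability; the paper's lower bound says that for any constant number of passes, no learner can beat this quadratic memory without paying $2^{\Omega(n)}$ samples.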
Publication: arXiv e-prints
Pub Date: October 2023
DOI: 10.48550/arXiv.2310.08070
arXiv: arXiv:2310.08070
Bibcode: 2023arXiv231008070L
Keywords: Computer Science - Machine Learning; Computer Science - Computational Complexity
E-Print: To appear at FOCS 2023