Metric entropy of causal, discrete-time LTI systems
Abstract
In [1] it is shown that recurrent neural networks (RNNs) can learn, in a metric-entropy-optimal manner, discrete-time, linear time-invariant (LTI) systems. This is effected by comparing the number of bits needed to encode the approximating RNN to the metric entropy of the class of LTI systems under consideration [2, 3]. The purpose of this note is to provide an elementary, self-contained proof of the metric entropy results in [2, 3], in the process of which minor mathematical issues appearing in [2, 3] are cleaned up. These corrections also lead to the correction of a constant in a result in [1] (see Remark 2.5).
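For context, the metric entropy referred to above is standardly defined via covering numbers. The following is a generic sketch of that definition, using notation not taken from [1-3]:

```latex
% Covering number of a class \mathcal{C} in a metric space with metric \rho:
% the smallest number of \epsilon-balls needed to cover \mathcal{C}.
N(\epsilon; \mathcal{C}, \rho)
  = \min\Bigl\{ n \in \mathbb{N} :
      \exists\, f_1, \dots, f_n \ \text{such that}\
      \mathcal{C} \subseteq \bigcup_{i=1}^{n} B_\rho(f_i, \epsilon) \Bigr\},
\qquad
% Metric entropy: the number of bits needed to specify an element
% of \mathcal{C} to accuracy \epsilon.
H(\epsilon; \mathcal{C}, \rho) = \log_2 N(\epsilon; \mathcal{C}, \rho).
```

Under this definition, $H(\epsilon)$ is the minimal number of bits needed to describe any element of the class to accuracy $\epsilon$; comparing this quantity to the bit budget of an approximating RNN is what underlies the optimality statement in the abstract.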
 Publication:

arXiv e-prints
 Pub Date:
 November 2022
 DOI:
 10.48550/arXiv.2211.15466
 arXiv:
 arXiv:2211.15466
 Bibcode:
 2022arXiv221115466H
 Keywords:

 Mathematics - Dynamical Systems;
 Computer Science - Information Theory
 E-Print:
 [1] arXiv:2105.02556