A Training-Based Mutual Information Lower Bound for Large-Scale Systems
Abstract
We provide a mutual information lower bound that can be used to analyze the effect of training in models with unknown parameters. For large-scale systems, we show that this bound can be calculated as the difference between two derivatives of a conditional entropy function, without requiring explicit estimation of the unknown parameters. We give a step-by-step procedure for computing the bound and an example application, together with a comparison against known classical mutual information bounds.
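The core computation described above, evaluating the bound as a difference of two derivatives of a conditional entropy function, can be sketched numerically. Everything below is illustrative rather than taken from the paper: the Gaussian-style entropy function `h`, the noise variance, and the two evaluation points are placeholder assumptions standing in for the paper's actual conditional entropy and operating points.

```python
import numpy as np

def finite_diff(f, x, eps=1e-5):
    """Central finite-difference approximation of f'(x)."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# Hypothetical conditional-entropy function of a scalar parameter p
# (e.g. a power or training-resource level). For this toy Gaussian
# model: h(p) = 0.5 * log(2*pi*e*(p + noise_var)), in nats.
noise_var = 1.0
def h(p):
    return 0.5 * np.log(2 * np.pi * np.e * (p + noise_var))

# Sketch of the bound as the difference of two derivatives of h,
# evaluated at two illustrative operating points p0 < p1.
p0, p1 = 1.0, 4.0
bound = finite_diff(h, p0) - finite_diff(h, p1)
print(bound)  # h'(1) - h'(4) = 0.25 - 0.10 = 0.15 nats
```

Since `h` is concave here, its derivative is decreasing, so the difference is nonnegative; in this toy setup it serves only to show the mechanics of evaluating such a bound by numerical differentiation.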
- Publication:
- arXiv e-prints
- Pub Date:
- July 2021
- DOI:
- 10.48550/arXiv.2108.00034
- arXiv:
- arXiv:2108.00034
- Bibcode:
- 2021arXiv210800034M
- Keywords:
- Computer Science - Information Theory
- E-Print:
- This work has been submitted to the IEEE for possible publication. arXiv admin note: substantial text overlap with arXiv:2012.00970