Distributed Approximation of Functions over Fast Fading Channels with Applications to Distributed Learning and the Max-Consensus Problem
Abstract
In this work, we consider the problem of distributed approximation of functions over multiple-access channels with additive noise. In contrast to previous works, we take fast fading into account and give explicit probability bounds for the approximation error, which allow us to derive bounds on the number of channel uses needed to approximate a function up to a given accuracy. Neither the fading nor the noise process is limited to Gaussian distributions. Instead, we consider sub-gaussian random variables, which include Gaussian as well as many other distributions of practical relevance. The results are motivated by, and have immediate applications to, a) computing predictors in models for distributed machine learning and b) the max-consensus problem in ultra-dense networks.
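To illustrate the setting the abstract describes, the following toy Python sketch simulates over-the-air computation of a function (here, the mean of the nodes' values) via the superposition property of an additive multiple-access channel with fading and noise. This is only a minimal illustration of the channel model, not the paper's scheme; the Gaussian fading/noise parameters, the unit-mean fading assumption, and the averaging over repeated channel uses are all illustrative assumptions.

```python
import random
import statistics

def over_the_air_mean(values, channel_uses, fade_std=0.1, noise_std=0.5, seed=0):
    """Toy simulation: estimate the mean of `values` from superimposed
    transmissions over a noisy fading multiple-access channel.

    Each channel use, the receiver observes
        y = sum_k h_k * x_k + n,
    with illustrative fading h_k ~ N(1, fade_std^2) and noise n ~ N(0, noise_std^2).
    (Gaussian is one instance of the sub-gaussian class the paper allows.)
    """
    rng = random.Random(seed)
    estimates = []
    for _ in range(channel_uses):
        y = sum(rng.gauss(1.0, fade_std) * x for x in values) + rng.gauss(0.0, noise_std)
        estimates.append(y / len(values))  # receiver's per-use estimate of the mean
    # Averaging over many channel uses drives the estimation error down,
    # mirroring the abstract's bounds on the number of channel uses needed
    # for a given approximation accuracy.
    return statistics.mean(estimates)

values = [0.2, 0.5, 0.9, 0.4]
true_mean = sum(values) / len(values)
estimate = over_the_air_mean(values, channel_uses=400)
```

With only a handful of channel uses the estimate is dominated by fading and noise, while several hundred uses bring it close to the true mean, which is the qualitative trade-off the paper's explicit probability bounds quantify.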
 Publication:

arXiv e-prints
 Pub Date:
 July 2019
 arXiv:
 arXiv:1907.03777
 Bibcode:
 2019arXiv190703777B
 Keywords:

 Computer Science - Information Theory
 E-Print:
 2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton), Monticello, IL, USA, September 24-27, 2019, pp. 1146-1153