Communication-Efficient Federated Learning via Quantized Compressed Sensing
Abstract
In this paper, we present a communication-efficient federated learning framework inspired by quantized compressed sensing. The presented framework consists of gradient compression for wireless devices and gradient reconstruction for a parameter server (PS). Our strategy for gradient compression is to sequentially perform block sparsification, dimensional reduction, and quantization. Thanks to gradient sparsification and quantization, our strategy can achieve a higher compression ratio than one-bit gradient compression. For accurate aggregation of the local gradients from the compressed signals at the PS, we put forth an approximate minimum mean square error (MMSE) approach for gradient reconstruction using the expectation-maximization generalized-approximate-message-passing (EM-GAMP) algorithm. Assuming a Bernoulli Gaussian-mixture prior, this algorithm iteratively updates the posterior mean and variance of the local gradients from the compressed signals. We also present a low-complexity approach for gradient reconstruction. In this approach, we use the Bussgang theorem to aggregate the local gradients from the compressed signals, then compute an approximate MMSE estimate of the aggregated gradient using the EM-GAMP algorithm. We also provide a convergence rate analysis of the presented framework. Using the MNIST dataset, we demonstrate that the presented framework achieves performance almost identical to the case without compression, while significantly reducing the communication overhead for federated learning.
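The device-side compression pipeline described above (block sparsification, then dimensional reduction, then quantization) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the block size, sparsity level, measurement dimension, random Gaussian measurement matrix, and uniform scalar quantizer are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def compress_gradient(g, block_size=64, sparsity=8, m=32, levels=16):
    """Sketch of the three sequential compression stages from the abstract.

    All parameter values here are hypothetical choices for illustration.
    """
    blocks = g.reshape(-1, block_size)
    compressed = []
    for b in blocks:
        # 1) Block sparsification: keep only the `sparsity` largest-magnitude
        #    entries of the block and zero out the rest.
        s = np.zeros_like(b)
        keep = np.argsort(np.abs(b))[-sparsity:]
        s[keep] = b[keep]

        # 2) Dimensional reduction: project the sparse block with a random
        #    Gaussian measurement matrix A (m < block_size), as in
        #    compressed sensing.
        A = rng.standard_normal((m, block_size)) / np.sqrt(m)
        y = A @ s

        # 3) Quantization: uniform scalar quantization of the compressed
        #    measurements to a small number of levels.
        scale = np.max(np.abs(y)) + 1e-12
        q = np.round((y / scale) * (levels // 2)).astype(np.int8)
        compressed.append((q, scale))
    return compressed

# Example: compress a 256-dimensional local gradient into 4 quantized blocks.
g = rng.standard_normal(256)
compressed = compress_gradient(g)
```

In a full system, the measurement matrix would be shared (e.g., via a common seed) so the PS can run EM-GAMP reconstruction on the received quantized measurements; that server-side step is omitted here.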
 Publication:

arXiv e-prints
 Pub Date:
 November 2021
 arXiv:
 arXiv:2111.15071
 Bibcode:
 2021arXiv211115071O
 Keywords:

 Computer Science - Distributed, Parallel, and Cluster Computing;
 Computer Science - Artificial Intelligence;
 Electrical Engineering and Systems Science - Signal Processing