On the Communication Latency of Wireless Decentralized Learning
Abstract
We consider a wireless network comprising $n$ nodes located within a circular area of radius $R$, which are participating in a decentralized learning algorithm to optimize a global objective function using their local datasets. To enable gradient exchanges across the network, we assume each node communicates only with a set of neighboring nodes, which are within a distance $R n^{-\beta}$ of itself, where $\beta\in(0,\frac{1}{2})$. We use tools from network information theory and random geometric graph theory to show that the communication delay for a single round of exchanging gradients on all the links throughout the network scales as $\mathcal{O}\left(\frac{n^{2-3\beta}}{\beta\log n}\right)$, increasing (at different rates) with both the number of nodes and the gradient exchange threshold distance.
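The communication graph described above is a random geometric graph: nodes dropped uniformly in a disk of radius $R$, with a link between every pair closer than the gradient-exchange threshold distance. The following is a minimal sketch of that construction (not the paper's code; the function name and the uniform-placement assumption are illustrative):

```python
import math
import random

def gradient_exchange_graph(n, R, d, seed=0):
    """Sample n nodes uniformly in a disk of radius R and connect every
    pair within distance d (the gradient-exchange threshold distance).
    Returns the node positions and the set of undirected links."""
    rng = random.Random(seed)
    nodes = []
    for _ in range(n):
        # Uniform sampling in a disk: radius R*sqrt(U) corrects for the
        # fact that area grows quadratically with radius.
        r = R * math.sqrt(rng.random())
        theta = 2 * math.pi * rng.random()
        nodes.append((r * math.cos(theta), r * math.sin(theta)))
    links = set()
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(nodes[i], nodes[j]) <= d:
                links.add((i, j))
    return nodes, links
```

A single round of decentralized learning requires a gradient exchange on every one of these links, which is why the delay bound above depends on both $n$ and the threshold distance controlling the link density.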
Publication: arXiv e-prints
Pub Date: February 2020
arXiv: arXiv:2002.04069
Bibcode: 2020arXiv200204069N
Keywords: Computer Science - Information Theory; Computer Science - Machine Learning; Statistics - Machine Learning
EPrint: Submitted to the 2020 IEEE International Symposium on Information Theory (ISIT 2020)