Dynamic Attention-based Communication-Efficient Federated Learning
Abstract
Federated learning (FL) offers a way to train a global machine learning model while maintaining data privacy, without needing access to the data stored locally at the clients. However, FL suffers performance degradation when client data distributions are non-IID, and a longer training duration to combat this degradation may not be feasible due to communication limitations. To address this challenge, we propose a new adaptive training algorithm $\texttt{AdaFL}$, which comprises two components: (i) an attention-based client selection mechanism for a fairer training scheme among the clients; and (ii) a dynamic fraction method to balance the trade-off between performance stability and communication efficiency. Experimental results show that our $\texttt{AdaFL}$ algorithm outperforms the standard $\texttt{FedAvg}$ algorithm, and can be incorporated into various state-of-the-art FL algorithms to further improve them with respect to three aspects: model accuracy, performance stability, and communication efficiency.
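The abstract does not spell out how the two components work, but they can be pictured roughly as below. This is a minimal sketch, not the paper's actual method: it assumes the attention scores are per-client weights that bias a softmax sampling of participants, and that the participation fraction grows linearly over the training rounds; both choices are illustrative assumptions.

```python
# Hypothetical sketch of the two AdaFL components named in the abstract:
# (i) attention-based client selection and (ii) a dynamic participation fraction.
# The score update rule and the fraction schedule are assumptions for illustration.
import numpy as np

def select_clients(attention_scores, fraction, rng):
    """Sample a subset of clients with probability proportional to softmax(attention)."""
    num_clients = len(attention_scores)
    num_selected = max(1, int(fraction * num_clients))
    probs = np.exp(attention_scores - attention_scores.max())
    probs /= probs.sum()
    return rng.choice(num_clients, size=num_selected, replace=False, p=probs)

def dynamic_fraction(round_idx, total_rounds, start=0.1, end=0.5):
    """Grow the fraction of participating clients linearly across rounds (assumed schedule)."""
    return start + (end - start) * round_idx / max(1, total_rounds - 1)

rng = np.random.default_rng(0)
attention = np.zeros(100)  # one score per client, to be updated from training feedback
for t in range(50):
    frac = dynamic_fraction(t, 50)
    chosen = select_clients(attention, frac, rng)
    # ... run local training on `chosen`, aggregate updates, then adjust `attention`,
    # e.g. from each selected client's contribution in this round (assumption).
```

Starting with a small fraction keeps early rounds cheap, while the growing fraction stabilizes performance later in training, which matches the stated trade-off between communication efficiency and performance stability.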
- Publication:
- arXiv e-prints
- Pub Date:
- August 2021
- DOI:
- 10.48550/arXiv.2108.05765
- arXiv:
- arXiv:2108.05765
- Bibcode:
- 2021arXiv210805765C
- Keywords:
- Computer Science - Machine Learning; Computer Science - Distributed, Parallel, and Cluster Computing; I.2
- E-Print:
- 7 pages, 3 figures, presented at the International Workshop on Federated and Transfer Learning for Data Sparsity and Confidentiality (FTL-IJCAI 2021) in conjunction with the 30th International Joint Conference on Artificial Intelligence (IJCAI), 2021