Fully Decentralized, Scalable Gaussian Processes for Multi-Agent Federated Learning
Abstract
In this paper, we propose decentralized and scalable algorithms for Gaussian process (GP) training and prediction in multi-agent systems. To decentralize the implementation of GP training optimization algorithms, we employ the alternating direction method of multipliers (ADMM). A closed-form solution of the decentralized proximal ADMM is provided for the case of GP hyper-parameter training with maximum likelihood estimation. Multiple aggregation techniques for GP prediction are decentralized with the use of iterative and consensus methods. In addition, we propose a covariance-based nearest neighbor selection strategy that enables a subset of agents to perform predictions. The efficacy of the proposed methods is illustrated with numerical experiments on synthetic and real data.
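As a rough illustration of the kind of decentralized prediction pipeline the abstract describes, the sketch below selects a subset of agents by kernel covariance with the query point and fuses their local GP predictions. This is a minimal sketch, not the paper's algorithm: the RBF kernel, the top-k covariance score, and the product-of-experts style aggregation are illustrative assumptions standing in for the covariance-based nearest-neighbor selection and aggregation techniques the paper develops.

```python
# Minimal sketch (illustrative assumptions, not the paper's exact method):
# covariance-based selection of a subset of agents, each of which computes a
# local GP prediction, followed by a precision-weighted (product-of-experts
# style) fusion of the local predictive means and variances.
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between the rows of a and b."""
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def local_gp_predict(X, y, x_star, noise=1e-2):
    """Exact GP posterior mean/variance at x_star from one agent's local data."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    k_star = rbf_kernel(X, x_star)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, k_star)
    mean = (k_star.T @ alpha).ravel()
    var = np.diag(rbf_kernel(x_star, x_star) - v.T @ v) + noise
    return mean, var

def select_agents(agent_data, x_star, k=3):
    """Keep the k agents whose data has the largest kernel covariance with x_star."""
    scores = [rbf_kernel(X, x_star).max() for X, _ in agent_data]
    return np.argsort(scores)[-k:]

def aggregate_poe(means, variances):
    """Product-of-experts fusion: precision-weighted combination of local predictions."""
    precisions = [1.0 / v for v in variances]
    agg_var = 1.0 / np.sum(precisions, axis=0)
    agg_mean = agg_var * np.sum([p * m for p, m in zip(precisions, means)], axis=0)
    return agg_mean, agg_var

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Each agent holds a local slice of a 1-D regression problem.
    agent_data = []
    for c in np.linspace(-4, 4, 5):
        X = c + rng.uniform(-1, 1, size=(20, 1))
        y = np.sin(X).ravel() + 0.1 * rng.standard_normal(20)
        agent_data.append((X, y))

    x_star = np.array([[0.5]])
    chosen = select_agents(agent_data, x_star, k=3)
    preds = [local_gp_predict(*agent_data[i], x_star) for i in chosen]
    mean, var = aggregate_poe([m for m, _ in preds], [v for _, v in preds])
    print("selected agents:", chosen, "mean:", mean, "var:", var)
```

In a fully decentralized setting, the precision-weighted sums in the aggregation step could be computed by iterative consensus over the agents' communication graph rather than by a central node, which is the role the abstract assigns to the iterative and consensus methods.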
- Publication: arXiv e-prints
- Pub Date: March 2022
- DOI: 10.48550/arXiv.2203.02865
- arXiv: arXiv:2203.02865
- Bibcode: 2022arXiv220302865K
- Keywords: Statistics - Machine Learning; Computer Science - Machine Learning; Computer Science - Multiagent Systems; Computer Science - Robotics; Mathematics - Optimization and Control