Distributed Variable Sample-size Stochastic Optimization with Fixed Step-sizes
Abstract
The paper considers distributed stochastic optimization over randomly switching networks, where agents collaboratively minimize the average of all agents' local expectation-valued convex cost functions. Due to the stochasticity of gradient observations, the distributedness of local functions, and the randomness of communication topologies, distributed algorithms with a convergence guarantee under fixed stepsizes have not yet been achieved. This work incorporates a variance reduction scheme into the distributed stochastic gradient tracking algorithm, where local gradients are estimated by averaging across a variable number of sampled gradients. With an independently and identically distributed (i.i.d.) random network, we show that all agents' iterates converge almost surely to the same optimal solution under fixed stepsizes. When the global cost function is strongly convex and the sample size increases at a geometric rate, we prove that the iterates converge geometrically to the unique optimal solution, and we establish the iteration, oracle, and communication complexities. The algorithm's performance, including rate and complexity analysis, is further investigated with constant stepsizes and a polynomially increasing sample size. Finally, the empirical performance of the algorithm is illustrated with numerical examples.
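The abstract's scheme, gradient tracking combined with a geometrically increasing batch size for variance reduction, can be sketched as follows on a toy quadratic problem. This is a minimal illustration, not the paper's exact method: the ring topology, fixed (non-switching) mixing matrix, stepsize, and batch-growth rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5          # number of agents
T = 100        # iterations
alpha = 0.1    # fixed stepsize
q = 1.1        # geometric batch-size growth rate (assumed for illustration)

# Local costs f_i(x) = 0.5*(x - c_i)^2, so the global minimizer is mean(c)
c = rng.normal(size=n)
x_star = c.mean()

# Doubly stochastic mixing matrix for a fixed ring network
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 1/3
    W[i, (i + 1) % n] = 1/3
    W[i, (i - 1) % n] = 1/3

def noisy_grad(x, ci, batch):
    """Average `batch` noisy gradient samples: the variance-reduction step."""
    return (x - ci) + rng.normal(scale=1.0, size=batch).mean()

x = np.zeros(n)
g = np.array([noisy_grad(x[i], c[i], 1) for i in range(n)])
y = g.copy()   # gradient tracker, initialized to the local gradient estimates

for k in range(T):
    batch = int(np.ceil(q ** k))            # geometrically increasing sample size
    x = W @ x - alpha * y                   # consensus step plus descent along tracker
    g_new = np.array([noisy_grad(x[i], c[i], batch) for i in range(n)])
    y = W @ y + g_new - g                   # track the network-average gradient
    g = g_new

err = np.abs(x - x_star).max()              # worst agent's distance to the optimum
```

With the fixed stepsize kept constant throughout, the shrinking gradient noise (variance decays like 1/batch) is what lets all agents' iterates approach the common minimizer rather than oscillate in a noise ball.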
Publication: arXiv e-prints
Pub Date: August 2021
arXiv: arXiv:2108.05078
Bibcode: 2021arXiv210805078L
Keywords: Mathematics - Optimization and Control