Distributed optimization algorithm for multi-agent networks with lazy gradient information
Abstract
This note proposes two communication-reduced distributed optimization algorithms over undirected multi-agent networks based on so-called lazy gradient information. Lazy gradients are gradients that have not changed significantly over the past iterations and therefore need not be transmitted among agents, which correspondingly reduces the communication load in the network. The asymptotic properties of the proposed algorithms are established for both the deterministic and the stochastic frameworks. Compared with the existing literature using lazy gradient information, the proposed algorithms are fully distributed and thus better suited to decentralized multi-agent networks. The effectiveness of the proposed algorithms is also verified through numerical simulations.
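To make the lazy-gradient idea concrete, the following is a minimal Python sketch of a consensus-based gradient method in which each agent re-broadcasts its gradient only when it has changed by more than a threshold since its last transmission; otherwise neighbors reuse the cached value. The quadratic local costs, the absolute-change trigger, the mixing matrix, and the step size are illustrative assumptions, not the exact triggering rule or update form analyzed in the paper.

```python
import numpy as np

def local_grad(i, x, A, b):
    """Gradient of the (assumed) local cost f_i(x) = 0.5 * ||A[i] @ x - b[i]||^2."""
    return A[i].T @ (A[i] @ x - b[i])

def lazy_distributed_gradient(A, b, W, steps=200, alpha=0.05, tau=1e-3):
    """Sketch of a lazy-gradient scheme: gradients are broadcast only when
    they differ from the last transmitted value by more than tau."""
    n, d = len(A), A[0].shape[1]
    x = np.zeros((n, d))        # local estimates, one row per agent
    g_sent = np.zeros((n, d))   # last gradients broadcast to neighbors
    comms = 0                   # number of gradient transmissions
    for _ in range(steps):
        g_now = np.array([local_grad(i, x[i], A, b) for i in range(n)])
        for i in range(n):
            # Lazy trigger (assumed form): broadcast only if the gradient moved enough.
            if np.linalg.norm(g_now[i] - g_sent[i]) > tau:
                g_sent[i] = g_now[i]
                comms += 1
        # Consensus step using the (possibly stale) gradients held by neighbors.
        x = W @ x - alpha * (W @ g_sent)
    return x, comms

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 4, 3
    A = [rng.standard_normal((5, d)) for _ in range(n)]
    b = [rng.standard_normal(5) for _ in range(n)]
    W = np.full((n, n), 1.0 / n)  # doubly stochastic mixing matrix (complete graph)
    x, comms = lazy_distributed_gradient(A, b, W)
    print("disagreement:", np.linalg.norm(x - x.mean(axis=0)))
    print("gradient broadcasts:", comms)
```

In this toy run, the count of gradient broadcasts illustrates the communication savings: once the local gradients stop changing by more than the threshold, no further transmissions occur while the consensus updates continue with the cached values.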