Distributed Online Adaptive Gradient Descent With Event-Triggered Communication
Abstract
This article proposes an event-triggered adaptive gradient descent method for distributed online optimization over a constraint set. In the proposed method, a group of agents cooperatively minimizes a dynamic regret, defined as the cumulative loss of the agents' estimates relative to a time-varying optimal strategy. The adaptive learning rate of the online algorithm is adjusted using the second moment of the gradient. To reduce unnecessary communication, we adopt an event-triggered approach: local communication between agents is performed only when the error between the last triggered estimate and the current estimate exceeds a trigger threshold. We show that the proposed algorithm achieves a sublinear regret bound if the path variation of the dynamic optimal strategy is sufficiently small. We also show that, by appropriately setting the trigger threshold, the convergence performance is comparable to the time-triggered case while the number of communications is effectively reduced.
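To make the two key ingredients concrete, the following is a minimal single-agent sketch of an AdaGrad-style update (learning rate scaled by the accumulated second moment of the gradient) combined with an event-triggered broadcast rule. All names (`adagrad_step`, `project_box`, `run_agent`) and the box constraint set are illustrative assumptions, not the paper's exact algorithm, which runs over a network of cooperating agents.

```python
import numpy as np

def adagrad_step(x, g, G, eta=0.1, eps=1e-8):
    """AdaGrad-style update: the step size for each coordinate is
    scaled by the accumulated second moment of the gradient."""
    G = G + g * g                      # accumulate squared gradients
    x = x - eta * g / (np.sqrt(G) + eps)
    return x, G

def project_box(x, lo=-1.0, hi=1.0):
    """Projection onto a box constraint set [lo, hi]^d
    (an illustrative choice of constraint set)."""
    return np.clip(x, lo, hi)

def run_agent(grads, threshold=0.05):
    """Run online steps over a stream of gradients; 'communicate'
    only when the current estimate has drifted from the last
    broadcast estimate by more than the trigger threshold."""
    d = grads[0].shape[0]
    x = np.zeros(d)                    # current estimate
    G = np.zeros(d)                    # accumulated second moment
    x_last = x.copy()                  # last triggered (broadcast) estimate
    n_broadcasts = 0
    for g in grads:
        x, G = adagrad_step(x, g, G)
        x = project_box(x)
        # Event-trigger condition: broadcast only on sufficient drift.
        if np.linalg.norm(x - x_last) > threshold:
            x_last = x.copy()          # neighbors would receive x here
            n_broadcasts += 1
    return x, n_broadcasts
```

A larger threshold suppresses more broadcasts at the cost of neighbors acting on staler estimates, which is the trade-off the regret analysis quantifies.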