
Distributed Online Adaptive Gradient Descent With Event-Triggered Communication

Abstract

This article proposes an event-triggered adaptive gradient descent method for distributed online optimization over a constraint set. In the proposed method, a group of agents cooperatively minimizes a dynamic regret, that is, the cumulative loss of the agents' estimates measured against a time-varying optimal strategy. The adaptive learning rate of the online algorithm is adjusted using the second moment of the gradient. We consider an event-triggered approach to reduce unnecessary communication: local communication between agents is performed only when the error between the last triggered estimate and the current estimate exceeds a trigger threshold. We show that the proposed algorithm achieves a sublinear regret bound if the path variation of the dynamic optimal strategy is sufficiently small. We also show that, by setting the trigger threshold appropriately, the convergence performance is comparable to the time-triggered case while the number of communications is effectively reduced.
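To make the mechanism concrete, below is a minimal Python sketch of an event-triggered, adaptively stepped online gradient method of the kind the abstract describes. It assumes a ring network with doubly stochastic mixing weights, simple quadratic tracking losses, an l2-ball constraint set, and an AdaGrad-style second-moment step size; all function and parameter names (e.g., `run_event_triggered_gd`, `trigger_threshold`) are illustrative and not taken from the paper, which should be consulted for the exact update rules and regret analysis.

```python
import numpy as np


def project_l2_ball(x, radius=5.0):
    """Euclidean projection onto an l2-ball (an illustrative constraint set)."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)


def run_event_triggered_gd(n_agents=4, dim=2, horizon=200,
                           eta=0.5, eps=1e-8, trigger_threshold=0.05, seed=0):
    rng = np.random.default_rng(seed)

    # Ring-graph mixing weights (doubly stochastic, assumes n_agents >= 3);
    # the paper's actual network model may differ.
    W = np.zeros((n_agents, n_agents))
    for i in range(n_agents):
        W[i, i] = 0.5
        W[i, (i - 1) % n_agents] = 0.25
        W[i, (i + 1) % n_agents] = 0.25

    # Slowly drifting target: each agent's round-t loss is ||x - theta_t||^2 / 2,
    # so the optimal strategy moves over time with a small path variation.
    theta = np.zeros(dim)

    x = rng.normal(size=(n_agents, dim))   # current estimates
    x_hat = x.copy()                       # last triggered (broadcast) estimates
    v = np.zeros((n_agents, dim))          # second-moment accumulators
    cum_loss, comms = 0.0, 0

    for t in range(horizon):
        theta = theta + 0.01 * rng.normal(size=dim)  # drift of the optimum

        # 1) Consensus step using only the last *broadcast* estimates,
        #    so no fresh communication happens unless an event fires.
        z = W @ x_hat

        # 2) Local adaptive gradient step (AdaGrad-style second moment),
        #    followed by projection onto the constraint set.
        for i in range(n_agents):
            g = x[i] - theta                        # gradient of the local quadratic loss
            cum_loss += 0.5 * np.dot(x[i] - theta, x[i] - theta)
            v[i] += g * g
            x[i] = project_l2_ball(z[i] - eta * g / (np.sqrt(v[i]) + eps))

        # 3) Event trigger: broadcast only if the deviation from the last
        #    triggered estimate exceeds the threshold.
        for i in range(n_agents):
            if np.linalg.norm(x[i] - x_hat[i]) > trigger_threshold:
                x_hat[i] = x[i].copy()
                comms += 1

    return cum_loss, comms


if __name__ == "__main__":
    cum_loss, comms = run_event_triggered_gd()
    print(f"cumulative loss ~ {cum_loss:.2f}, triggered broadcasts = {comms}")
```

In this toy setup the per-round optimal loss is zero, so the accumulated loss coincides with the dynamic regret. Raising `trigger_threshold` trades a larger regret for fewer triggered broadcasts, which is the qualitative trade-off the abstract highlights.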

Article
2024
English
Featured Keywords

Optimization
Heuristic algorithms
Cost function
Estimation
Symmetric matrices
Network systems
Multi-agent systems
Cooperative control
distributed optimization
multiagent network