Distributed randomized gradient-free mirror descent algorithm for constrained optimization

Z. Yu, D. W. C. Ho, D. Yuan, IEEE Transactions on Automatic Control, 2021 (ieeexplore.ieee.org)
This article is concerned with the multiagent optimization problem. A distributed randomized gradient-free mirror descent (DRGFMD) method is developed by introducing a randomized gradient-free oracle into the mirror descent scheme, where a non-Euclidean Bregman divergence is used. The classical gradient descent method is thereby generalized without requiring subgradient information of the objective functions. The proposed algorithms are the first distributed non-Euclidean zeroth-order methods, and they achieve an approximate rate of convergence that recovers the best known optimal rate for distributed nonsmooth constrained convex optimization. Moreover, a decentralized reciprocal weighted averaging (RWA) approximating sequence is investigated for the first time, and the RWA sequence is shown to converge over time-varying graphs. Rates of convergence are explored comprehensively for the algorithm with RWA (DRGFMD-RWA). The technique for constructing the decentralized RWA sequence provides new insight into searching for minimizers in distributed algorithms.
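To make the two main ingredients of the abstract concrete, the following is a minimal single-agent sketch of (i) a randomized gradient-free oracle, which estimates a gradient from two function evaluations along a random direction, and (ii) a non-Euclidean mirror descent step, here the entropic (KL-divergence) update on the probability simplex. This is an illustrative toy, not the paper's DRGFMD algorithm: the distributed, multiagent structure and the RWA averaging are omitted, and all names, the objective, and the step sizes are assumptions for the example.

```python
import numpy as np

def zeroth_order_grad(f, x, mu=1e-3, rng=None):
    """Randomized gradient-free oracle (two-point sketch): estimate a gradient
    of f at x from function values only, along a random Gaussian direction u.
    Note: x + mu*u may fall slightly off the feasible set; here the toy f is
    defined on all of R^d, so that is harmless."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

def mirror_descent_step(x, g, eta):
    """Entropic mirror descent update on the probability simplex: the Bregman
    divergence induced by negative entropy is the KL divergence, a standard
    non-Euclidean choice. The update is multiplicative followed by
    renormalization, so feasibility is preserved exactly."""
    y = x * np.exp(-eta * g)
    return y / y.sum()

# Toy nonsmooth convex objective on the 4-dimensional simplex (hypothetical):
# minimized at the uniform distribution, where f = 0.
d = 4
f = lambda x: np.abs(x - 1.0 / d).sum()

rng = np.random.default_rng(0)
x = np.array([0.7, 0.1, 0.1, 0.1])   # feasible starting point, f(x) = 0.9
best = f(x)
for t in range(1, 2001):
    g = zeroth_order_grad(f, x, mu=1e-3, rng=rng)
    x = mirror_descent_step(x, g, eta=0.5 / np.sqrt(t))  # decaying step size
    best = min(best, f(x))
```

Because only function evaluations of `f` are used, the sketch mirrors the abstract's point that no subgradient information is required; the decaying `O(1/sqrt(t))` step size is the usual choice for nonsmooth mirror descent.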