A Stochastic Operator Framework for Optimization and Learning With Sub-Weibull Errors

Publication type:
Article
Authors:
Bastianello, Nicola; Madden, Liam; Carli, Ruggero; Dall'Anese, Emiliano
Affiliations:
Royal Institute of Technology; University of British Columbia; University of Padua; University of Colorado System; University of Colorado Boulder
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN:
0018-9286
DOI:
10.1109/TAC.2024.3419186
Publication date:
2024
Pages:
8722-8737
Keywords:
Stochastic processes; Convergence; Tail; Optimization; Additives; Random variables; Data models; Federated learning; high probability convergence; inexact optimization; online optimization; stochastic operators
Abstract:
This article proposes a framework to study the convergence of stochastic optimization and learning algorithms. The framework is modeled on the different challenges that these algorithms pose, such as 1) the presence of random additive errors (e.g., due to stochastic gradients), and 2) random coordinate updates (e.g., due to asynchrony in distributed setups). The article covers both convex and strongly convex problems, and it also analyzes online scenarios involving changes in the data and costs. The framework relies on interpreting stochastic algorithms as the iterated application of stochastic operators, which allows the powerful tools of operator theory to be brought to bear. In particular, we consider operators characterized by additive errors with sub-Weibull distributions (which parameterize a broad class of errors by their tail probability) and by random updates. In this framework, we derive convergence results in mean and with high probability, by bounding the distance of the current iterate from a solution of the optimization or learning problem. The contributions are discussed in light of federated learning applications.
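To make the setting described in the abstract concrete, below is a minimal NumPy sketch of an inexact stochastic operator iteration: a contractive gradient-descent map applied to a strongly convex quadratic, perturbed by additive noise with Weibull-distributed magnitude (Weibull random variables are sub-Weibull) and with random coordinate updates mimicking asynchrony. All names, parameter values, and the specific noise model here are illustrative assumptions for this sketch, not the paper's algorithm or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Strongly convex quadratic f(x) = 0.5 x^T A x - b^T x; the gradient-descent
# map T(x) = x - gamma * (A x - b) is a contraction for a small enough step.
n = 10
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)           # positive-definite Hessian (illustrative)
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)        # fixed point of T (the minimizer)

gamma = 1.0 / np.linalg.eigvalsh(A).max()   # step size ensuring contraction

def T(x):
    """Exact gradient-descent operator."""
    return x - gamma * (A @ x - b)

def sub_weibull_noise(size, theta=2.0, scale=1e-2):
    """Additive error with Weibull-distributed magnitude and random sign;
    Weibull variables are sub-Weibull with tail parameter theta (assumed model)."""
    mag = rng.weibull(1.0 / theta, size=size) * scale
    sign = rng.choice([-1.0, 1.0], size=size)
    return mag * sign

x = rng.standard_normal(n)
p = 0.5                               # probability each coordinate is updated
dist = []
for k in range(500):
    active = rng.random(n) < p        # random coordinate selection (asynchrony)
    x_new = T(x) + sub_weibull_noise(n)
    x = np.where(active, x_new, x)    # only selected coordinates move
    dist.append(np.linalg.norm(x - x_star))

print(f"initial distance {dist[0]:.3e}, final distance {dist[-1]:.3e}")
```

Running the sketch shows the tracked distance to the solution shrinking to a noise floor set by the error scale, which is the qualitative behavior (convergence to a neighborhood, in mean and with high probability) that the framework quantifies.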
Source URL: