An optimal randomized incremental gradient method

Publication type:
Article
Authors:
Lan, Guanghui; Zhou, Yi
Affiliations:
University System of Georgia; Georgia Institute of Technology
Journal:
MATHEMATICAL PROGRAMMING
ISSN:
0025-5610
DOI:
10.1007/s10107-017-1173-0
Publication date:
2018
Pages:
167-215
Keywords:
stochastic approximation algorithms; composite optimization; convex minimization
Abstract:
In this paper, we consider a class of finite-sum convex optimization problems whose objective function is given by the average of m (≥ 1) smooth components together with some other relatively simple terms. We first introduce a deterministic primal-dual gradient (PDG) method that can achieve the optimal black-box iteration complexity for solving these composite optimization problems using a primal-dual termination criterion. Our major contribution is to develop a randomized primal-dual gradient (RPDG) method, which needs to compute the gradient of only one randomly selected smooth component at each iteration, but can achieve better complexity than PDG in terms of the total number of gradient evaluations. More specifically, we show that the total number of gradient evaluations performed by RPDG can be O(√m) times smaller, both in expectation and with high probability, than the number performed by deterministic optimal first-order methods under favorable situations. We also show that the complexity of the RPDG method cannot be improved, by developing a new lower complexity bound for a general class of randomized methods for solving large-scale finite-sum convex optimization problems.
Source URL:
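
The abstract contrasts deterministic methods, which evaluate all m component gradients per iteration, with a randomized scheme that evaluates the gradient of only one randomly selected smooth component. The following is a minimal sketch of that finite-sum setting. The synthetic quadratic components, step size, and plain proximal stochastic-gradient update are illustrative assumptions only; they are not the paper's RPDG primal-dual update.

```python
import numpy as np

# Minimal sketch of the finite-sum setting described in the abstract:
#   minimize  f(x) = (1/m) * sum_i f_i(x) + h(x),
# with smooth quadratic components f_i(x) = 0.5 * (a_i^T x - b_i)^2 and a
# simple strongly convex term h(x) = 0.5 * mu * ||x||^2.  All problem data
# are synthetic, and the update below is a plain proximal stochastic-gradient
# step, NOT the paper's RPDG primal-dual update (hypothetical illustration).

rng = np.random.default_rng(0)
m, d = 100, 10                       # number of components, dimension
A = rng.standard_normal((m, d))      # rows a_i of the quadratic components
b = rng.standard_normal(m)
mu = 0.1                             # weight of the simple term h

def grad_component(i, x):
    """Gradient of the single smooth component f_i at x."""
    return (A[i] @ x - b[i]) * A[i]

x = np.zeros(d)
step = 0.02
for _ in range(5000):
    i = rng.integers(m)              # pick one component uniformly at random
    g = grad_component(i, x)         # only ONE gradient evaluation per step
    x = (x - step * g) / (1 + step * mu)  # exact prox step for h

obj = 0.5 * np.mean((A @ x - b) ** 2) + 0.5 * mu * (x @ x)
print(f"approximate objective value: {obj:.4f}")
```

A deterministic method of the kind the abstract calls PDG would instead compute the full gradient, touching all m component gradients per iteration (here, A.T @ (A @ x - b) / m); that per-iteration cost gap is what the O(√m) comparison in the abstract refers to.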