A linearly convergent doubly stochastic Gauss-Seidel algorithm for solving linear equations and a certain class of over-parameterized optimization problems
Document type:
Article; Proceedings Paper
Authors:
Razaviyayn, Meisam; Hong, Mingyi; Reyhanian, Navid; Luo, Zhi-Quan
Affiliations:
University of Southern California; University of Minnesota System; University of Minnesota Twin Cities; Shenzhen Research Institute of Big Data; The Chinese University of Hong Kong, Shenzhen
Journal:
MATHEMATICAL PROGRAMMING
ISSN/ISBN:
0025-5610
DOI:
10.1007/s10107-019-01404-0
Publication date:
2019
Pages:
465-496
Keywords:
Abstract:
Consider the classical problem of solving a general linear system of equations Ax = b. It is well known that the (successively over-relaxed) Gauss-Seidel scheme and many of its variants may not converge when A is neither diagonally dominant nor symmetric positive definite. Can we have a linearly convergent G-S type algorithm that works for any A? In this paper we answer this question affirmatively by proposing a doubly stochastic G-S algorithm that is provably linearly convergent (in the mean square error sense) for any feasible linear system of equations. The key in the algorithm design is to introduce a nonuniform doubly stochastic scheme for picking the equation and the variable in each update step, together with a stepsize rule. These techniques also generalize to certain iterative alternating projection algorithms for solving the linear feasibility problem Ax ≤ b with an arbitrary A, as well as to high-dimensional minimization problems for training over-parameterized models in machine learning. Our results demonstrate that a carefully designed randomization scheme can make an otherwise divergent G-S algorithm converge.
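The sampling-plus-stepsize idea described in the abstract can be illustrated with a minimal sketch. This is an assumption-laden toy version, not the paper's exact update rule: it samples an equation–variable pair (i, j) with probability proportional to a_ij² (one natural nonuniform choice; the paper derives its own distribution and stepsize rule) and takes a damped Gauss-Seidel step on the single sampled variable. The matrix below is an illustrative non-symmetric, non-diagonally-dominant example, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small consistent system Ax = b with a non-symmetric matrix that is not
# diagonally dominant (illustrative values chosen here, not from the paper).
A = np.array([[ 1.0,  2.0,  0.5],
              [ 0.3,  1.0, -1.5],
              [-1.4,  0.8,  1.0]])
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true
m, n = A.shape

# Nonuniform doubly stochastic sampling: pick entry (i, j) with probability
# |a_ij|^2 / ||A||_F^2.  This importance weighting keeps the update variance
# proportional to the current residual, which is what enables linear decay.
p = (A**2 / (A**2).sum()).ravel()

x = np.zeros(n)
gamma = 0.05  # small constant stepsize (an assumption, not the paper's rule)
for k in rng.choice(m * n, size=50_000, p=p):
    i, j = divmod(k, n)
    r_i = A[i] @ x - b[i]          # residual of the sampled equation i
    x[j] -= gamma * r_i / A[i, j]  # damped G-S step on the sampled variable j

print(np.linalg.norm(A @ x - b))   # residual shrinks geometrically on average
```

In expectation each step moves x along the negative gradient of ½‖Ax − b‖², so for a consistent system the mean squared error contracts at a linear rate, loosely mirroring the convergence behavior the paper establishes.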