A globally convergent incremental Newton method
Document type:
Article; Proceedings Paper
Authors:
Guerbuzbalaban, M.; Ozdaglar, A.; Parrilo, P.
Affiliations:
Massachusetts Institute of Technology (MIT)
Journal:
MATHEMATICAL PROGRAMMING
ISSN:
0025-5610
DOI:
10.1007/s10107-015-0897-y
Publication date:
2015
Pages:
283-313
Keywords:
subgradient methods
gradient methods
Abstract:
Motivated by machine learning problems over large data sets and by distributed optimization over networks, we develop and analyze a new method, the incremental Newton method, for minimizing the sum of a large number of strongly convex functions. We show that our method is globally convergent under a variable stepsize rule. We further show that, under a gradient growth condition, the convergence rate is linear for both variable and constant stepsize rules. By means of an example, we show that without the gradient growth condition the incremental Newton method cannot achieve linear convergence. Our analysis extends to other incremental methods: in particular, we obtain a linear convergence rate result for the incremental Gauss-Newton algorithm under a variable stepsize rule.
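To make the flavor of the update concrete, the following is a minimal Python sketch of one incremental-Newton-style pass over strongly convex quadratic components f_i(x) = 0.5 x'A_i x - b_i'x, where each step preconditions the current component gradient by the running sum of component Hessians seen so far. The quadratic data, the unit stepsize, and the single cyclic pass are illustrative assumptions for this special case, not the paper's exact algorithm or its stepsize rules.

import numpy as np

# Illustrative data: m strongly convex quadratic components in dimension d.
rng = np.random.default_rng(0)
d, m = 3, 5
A = []
for _ in range(m):
    M = rng.standard_normal((d, d))
    A.append(M @ M.T + np.eye(d))      # positive definite Hessian of f_i
b = [rng.standard_normal(d) for _ in range(m)]

x = np.zeros(d)
H = np.zeros((d, d))                   # running sum of component Hessians
for i in range(m):                     # a single incremental pass
    H += A[i]                          # accumulate curvature seen so far
    grad = A[i] @ x - b[i]             # gradient of the current component
    x = x - np.linalg.solve(H, grad)   # Newton-type step, unit stepsize

# Compare against the exact minimizer of the full sum.
x_star = np.linalg.solve(sum(A), sum(b))
print(np.linalg.norm(x - x_star))      # ~0 up to floating-point error

For quadratic components this single pass lands exactly on the minimizer of the full sum; for general strongly convex components one would iterate over many passes with the variable or constant stepsize rules analyzed in the paper.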