An adaptive augmented Lagrangian method for large-scale constrained optimization
Publication Type:
Article
Authors:
Curtis, Frank E.; Jiang, Hao; Robinson, Daniel P.
Affiliations:
Lehigh University; Johns Hopkins University
Journal:
MATHEMATICAL PROGRAMMING
ISSN/ISBN:
0025-5610
DOI:
10.1007/s10107-014-0784-y
Publication Date:
2015
Pages:
201-245
Keywords:
global convergence theory
stabilized SQP method
trust regions
penalty
algorithms
Abstract:
We propose an augmented Lagrangian algorithm for solving large-scale constrained optimization problems. The novel feature of the algorithm is an adaptive update for the penalty parameter, motivated by techniques recently proposed for exact penalty methods. This adaptive updating scheme greatly improves the overall performance of the algorithm without sacrificing the strengths of the core augmented Lagrangian framework, such as its ability to be implemented matrix-free. This is important because the matrix-free capability of augmented Lagrangian methods is responsible for the renewed interest in employing them to solve large-scale problems. We provide convergence results from remote starting points and illustrate through a set of numerical experiments that our method outperforms traditional augmented Lagrangian methods in terms of critical performance measures.
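To make the abstract concrete, the following is a minimal sketch of the classical augmented Lagrangian framework with a simple adaptive-style penalty increase: the penalty parameter is raised only when the constraint violation fails to decrease sufficiently between outer iterations. This is an illustrative toy, not the authors' specific steering rule; the test problem, the tolerances, and the contraction factor `theta` are all assumptions, and the inner subproblem is solved with BFGS for simplicity rather than matrix-free.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    """Objective: f(x) = x1^2 + x2^2 (illustrative choice)."""
    return x[0] ** 2 + x[1] ** 2

def c(x):
    """Equality constraint c(x) = 0, here x1 + x2 - 1 = 0."""
    return np.array([x[0] + x[1] - 1.0])

def augmented_lagrangian(x, lam, rho):
    """L_A(x; lam, rho) = f(x) + lam^T c(x) + (rho/2) ||c(x)||^2."""
    cx = c(x)
    return f(x) + lam @ cx + 0.5 * rho * (cx @ cx)

def solve(x0, max_outer=30, tol=1e-8, theta=0.25, gamma=10.0):
    """Outer loop: minimize L_A, update multipliers, adapt penalty.

    theta: required contraction of the constraint violation per
           outer iteration; gamma: penalty increase factor.
    """
    x = np.asarray(x0, dtype=float)
    lam = np.zeros(c(x).size)
    rho = 1.0
    viol_prev = np.inf
    for _ in range(max_outer):
        # Approximately minimize the augmented Lagrangian in x.
        res = minimize(augmented_lagrangian, x, args=(lam, rho),
                       method="BFGS")
        x = res.x
        viol = np.linalg.norm(c(x))
        if viol <= tol:
            break
        # First-order multiplier update.
        lam = lam + rho * c(x)
        # Adaptive-style rule: raise the penalty only when the
        # violation did not shrink by the factor theta.
        if viol > theta * viol_prev:
            rho *= gamma
        viol_prev = viol
    return x, lam, rho

x_star, lam_star, rho_final = solve([0.0, 0.0])
```

For this problem the iterates converge to x = (0.5, 0.5) with multiplier λ ≈ -1; a traditional scheme that increases ρ on every outer iteration would reach the same point but with a much larger final penalty, which is the kind of over-penalization an adaptive update is designed to avoid.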