Recent advances in trust region algorithms
Type:
Article; Proceedings Paper
Author(s):
Yuan, Ya-xiang
Affiliation(s):
Chinese Academy of Sciences; Academy of Mathematics & System Sciences, CAS
Journal:
MATHEMATICAL PROGRAMMING
ISSN/ISBN:
0025-5610
DOI:
10.1007/s10107-015-0893-2
Publication date:
2015
Pages:
249-281
Keywords:
derivative-free optimization
model-based algorithms
unconstrained optimization
superlinear convergence
nonlinear optimization
global convergence
quadratic models
Newton method
optimality conditions
cubic regularization
Abstract:
Trust region methods are a class of numerical methods for optimization. Unlike line-search methods, where a line search is carried out in each iteration, trust region methods compute a trial step by solving a trust region subproblem, in which a model function is minimized within a trust region. Because of the trust region constraint, nonconvex models can be used in trust region subproblems, and trust region algorithms can be applied to nonconvex and ill-conditioned problems. Normally it is easier to establish the global convergence of a trust region algorithm than that of its line search counterpart. In this paper, we review recent results on trust region methods for unconstrained optimization, constrained optimization, nonlinear equations and nonlinear least squares, nonsmooth optimization, and optimization without derivatives. Results on trust region subproblems and regularization methods are also discussed.
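The basic iteration described in the abstract — compute a trial step by approximately minimizing a model within the trust region, then accept or reject the step and adjust the radius — can be sketched as below. This is a generic textbook trust-region loop using the Cauchy point and a standard ratio test, not code from the paper; the quadratic test problem, tolerances, and update constants (0.25, 0.75, etc.) are illustrative choices.

```python
import numpy as np

# Illustrative convex quadratic test problem: f(x) = 0.5 x^T A x - b^T x,
# minimized at x* = A^{-1} b = (0.2, 0.4).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])

def f(x):
    return 0.5 * x @ A @ x - b @ x

def grad(x):
    return A @ x - b

def cauchy_point(g, B, delta):
    """Minimizer of the quadratic model g^T p + 0.5 p^T B p along -g,
    subject to ||p|| <= delta (the Cauchy point)."""
    gBg = g @ B @ g
    gnorm = np.linalg.norm(g)
    if gBg <= 0.0:
        tau = 1.0  # nonconvex model along -g: step to the boundary
    else:
        tau = min(gnorm**3 / (delta * gBg), 1.0)
    return -(tau * delta / gnorm) * g

def trust_region(x0, delta=1.0, delta_max=100.0, eta=0.1,
                 tol=1e-6, max_iter=500):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        B = A  # exact Hessian of the quadratic; in general, hess(x)
        p = cauchy_point(g, B, delta)
        pred = -(g @ p + 0.5 * p @ B @ p)        # predicted model reduction
        ared = f(x) - f(x + p)                   # actual reduction
        rho = ared / pred if pred > 0 else -1.0
        if rho < 0.25:
            delta *= 0.25                        # poor agreement: shrink region
        elif rho > 0.75 and np.isclose(np.linalg.norm(p), delta):
            delta = min(2.0 * delta, delta_max)  # good fit on boundary: expand
        if rho > eta:
            x = x + p                            # accept the trial step
    return x
```

The `gBg <= 0` branch shows the point made in the abstract: the trust region constraint keeps the subproblem well defined even when the model is nonconvex, so the step simply goes to the boundary instead of diverging.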