An inexact successive quadratic approximation method for L-1 regularized optimization
Type:
Article
Authors:
Byrd, Richard H.; Nocedal, Jorge; Oztoprak, Figen
Affiliations:
University of Colorado System; University of Colorado Boulder; Northwestern University; Istanbul Bilgi University
Journal:
MATHEMATICAL PROGRAMMING
ISSN:
0025-5610
DOI:
10.1007/s10107-015-0941-y
Publication year:
2016
Pages:
375-396
Keywords:
newton
selection
Abstract:
We study a Newton-like method for the minimization of an objective function that is the sum of a smooth function and an ℓ1 regularization term. This method, which is sometimes referred to in the literature as a proximal Newton method, computes a step by minimizing a piecewise quadratic model of the objective function. In order to make this approach efficient in practice, it is imperative to perform this inner minimization inexactly. In this paper, we give inexactness conditions that guarantee global convergence and that can be used to control the local rate of convergence of the iteration. Our inexactness conditions are based on a semi-smooth function that represents a (continuous) measure of the optimality conditions of the problem, and that embodies the soft-thresholding iteration. We give careful consideration to the algorithm employed for the inner minimization, and report numerical results on two test sets originating in machine learning.
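The soft-thresholding iteration and the optimality measure mentioned in the abstract can be illustrated with a small sketch. The following is not the authors' algorithm, only a minimal hedged example: it defines the componentwise soft-thresholding operator (the proximal operator of the ℓ1 norm), a standard continuous optimality residual of the kind the inexactness conditions are built on, and a plain proximal-gradient (ISTA) loop on a toy separable quadratic. All function names and the toy problem are assumptions for illustration.

```python
import math

def soft_threshold(z, tau):
    # Componentwise soft-thresholding: proximal operator of tau * ||.||_1.
    return [math.copysign(max(abs(zi) - tau, 0.0), zi) for zi in z]

def optimality_measure(x, grad, lam, alpha=1.0):
    # A standard continuous residual for min f(x) + lam*||x||_1:
    # phi(x) = x - soft_threshold(x - alpha*grad(x), alpha*lam),
    # which is zero exactly at a minimizer.
    g = grad(x)
    p = soft_threshold([xi - alpha * gi for xi, gi in zip(x, g)], alpha * lam)
    return [xi - pi for xi, pi in zip(x, p)]

# Toy smooth part (assumed for illustration): f(x) = 0.5*((x0-1)^2 + (x1-0.05)^2)
grad = lambda x: [x[0] - 1.0, x[1] - 0.05]
lam = 0.1

# Proximal-gradient (ISTA) iterations as a simple inner solver.
x = [0.0, 0.0]
for _ in range(200):
    g = grad(x)
    x = soft_threshold([xi - gi for xi, gi in zip(x, g)], lam)

# The minimizer shrinks x0 toward 1 by lam and zeroes x1 (|0.05| <= lam),
# so phi(x) vanishes at the computed point.
phi = optimality_measure(x, grad, lam)
```

The residual `phi` is a natural quantity to bound when terminating the inner minimization inexactly, which is the role the paper's semi-smooth optimality measure plays.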