Variance-Reduced Accelerated First-Order Methods: Central Limit Theorems and Confidence Statements
Publication Type:
Article
Authors:
Lei, Jinlong; Shanbhag, Uday V.
Affiliations:
Tongji University; Pennsylvania Commonwealth System of Higher Education (PCSHE); Pennsylvania State University; Pennsylvania State University - University Park
Journal:
MATHEMATICS OF OPERATIONS RESEARCH
ISSN/ISBN:
0364-765X
DOI:
10.1287/moor.2021.0068
Publication Year:
2025
Keywords:
stochastic approximation
asymptotic normality
gradient
optimization
parameters
Abstract:
In this paper, we consider a strongly convex stochastic optimization problem and propose three classes of variable sample-size stochastic first-order methods: (i) the standard stochastic gradient descent method, (ii) its accelerated variant, and (iii) the stochastic heavy-ball method. In each scheme, the exact gradient is approximated by averaging over an increasing batch of sampled gradients. We prove that when the sample size increases at a geometric rate, the generated estimates converge in mean to the optimal solution at an analogous geometric rate for schemes (i)-(iii). Building on this result, we provide central limit statements showing that the rescaled estimation errors converge in distribution to a normal distribution whose covariance matrix depends on the Hessian matrix, the covariance of the gradient noise, and the step length. If the sample size increases at a polynomial rate, we show that the estimation errors decay at a corresponding polynomial rate and establish the associated central limit theorems (CLTs). Under certain conditions, we discuss how both the algorithms and the associated limit theorems may be extended to constrained and nonsmooth regimes. Finally, we provide an avenue to construct confidence regions for the optimal solution based on the established CLTs and test the theoretical findings on a stochastic parameter estimation problem.
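As an illustration of scheme (i), the minimal sketch below runs variable sample-size stochastic gradient descent with a geometrically increasing batch size on a synthetic strongly convex least-squares problem, then forms plug-in confidence intervals from a CLT-style normal approximation. The problem instance, the step length gamma, the growth factor q, the scaling by the total sample count, and the sandwich covariance H^{-1} Sigma H^{-1} used for the intervals are all illustrative assumptions and do not reproduce the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed strongly convex test problem (not the paper's experiment):
# minimize f(x) = E[ 0.5 * (a^T x - b)^2 ],  where b = a^T x_star + noise.
d = 5
x_star = rng.normal(size=d)
A_cov = np.eye(d) + 0.5 * np.ones((d, d)) / d   # covariance of the features a
noise_std = 0.5

def sample_gradients(x, n):
    """Return n sampled gradients (a^T x - b) * a at x, stacked row-wise."""
    a = rng.multivariate_normal(np.zeros(d), A_cov, size=n)
    b = a @ x_star + noise_std * rng.normal(size=n)
    residual = a @ x - b
    return residual[:, None] * a

# Scheme (i): variable sample-size SGD with geometrically increasing batch sizes.
gamma = 0.2      # constant step length (assumed value)
q = 1.2          # geometric growth factor for the batch size (assumed)
N0 = 4           # initial batch size (assumed)
K = 40           # number of iterations

x = np.zeros(d)
for k in range(K):
    Nk = int(np.ceil(N0 * q ** k))                 # batch size grows geometrically
    g = sample_gradients(x, Nk).mean(axis=0)       # averaged mini-batch gradient
    x = x - gamma * g

# Plug-in confidence intervals: the paper's CLT gives a limiting covariance that
# depends on the Hessian, the gradient-noise covariance, and the step length.
# The sandwich form H^{-1} Sigma H^{-1} and the 1/N_total scaling used here are
# placeholders for illustration only.
H = A_cov                                # Hessian of f for this quadratic model
Sigma = noise_std ** 2 * A_cov           # gradient-noise covariance at x_star
C = np.linalg.inv(H) @ Sigma @ np.linalg.inv(H)
N_total = sum(int(np.ceil(N0 * q ** k)) for k in range(K))
half_width = 1.96 * np.sqrt(np.diag(C) / N_total)  # 95% per-coordinate intervals

print("error norm:", np.linalg.norm(x - x_star))
print("95% CI half-widths:", half_width)
```

Swapping the update for an accelerated or heavy-ball step (schemes (ii)-(iii)) changes only the iteration inside the loop; the geometric batch-size schedule and the normal-approximation reasoning carry over unchanged.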
Source URL: