General Error Estimates for the Longstaff-Schwartz Least-Squares Monte Carlo Algorithm

Publication Type:
Article
Author(s):
Zanger, Daniel Z.
Journal:
MATHEMATICS OF OPERATIONS RESEARCH
ISSN/ISBN:
0364-765X
DOI:
10.1287/moor.2019.1017
Publication Date:
2020
Pages:
923-946
Keywords:
AMERICAN OPTIONS; CONVERGENCE
Abstract:
We establish error estimates for the Longstaff-Schwartz algorithm, employing just a single set of independent Monte Carlo sample paths that is reused for all exercise time steps. Within the context of financial derivative payoff functions bounded in the uniform norm, we obtain new bounds on the stochastic part of the error of this algorithm for an approximation architecture that may be any arbitrary set of L^2 functions of finite Vapnik-Chervonenkis (VC) dimension, whenever the algorithm's least-squares regression optimization step is solved either exactly or approximately. Moreover, we show how to extend these estimates to the case of payoff functions bounded only in L^p, 2 < p < infinity. We also establish new overall error bounds for the Longstaff-Schwartz algorithm, including estimates on the approximation error for unconstrained linear, finite-dimensional polynomial approximation. Our results extend those in the literature by not imposing any uniform boundedness condition on the approximation architectures, by allowing each architecture to be any set of L^2 functions of finite VC dimension, and by establishing error estimates also in the case of epsilon-additive approximate least-squares optimization, epsilon greater than or equal to 0.
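For orientation, the sketch below is a minimal Python implementation of the Longstaff-Schwartz least-squares Monte Carlo recursion for a Bermudan put, reusing a single set of simulated paths at every exercise date, as in the setting the abstract describes. The geometric Brownian motion dynamics, the polynomial regression basis, the in-the-money restriction, and all parameter values are illustrative assumptions and are not prescribed by the paper.

```python
import numpy as np

def longstaff_schwartz_put(S0=100.0, K=100.0, r=0.05, sigma=0.2,
                           T=1.0, n_steps=50, n_paths=100_000,
                           degree=3, seed=0):
    """Price a Bermudan put by least-squares Monte Carlo.

    A single common set of simulated paths is reused at every exercise
    date. The polynomial regression basis and GBM dynamics are
    illustrative assumptions, not the paper's general VC-dimension setting.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    disc = np.exp(-r * dt)

    # Simulate geometric Brownian motion paths (one common path set).
    z = rng.standard_normal((n_paths, n_steps))
    log_increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    S = S0 * np.exp(np.cumsum(log_increments, axis=1))

    payoff = lambda s: np.maximum(K - s, 0.0)

    # Cash flows if the option is held to maturity.
    cash_flow = payoff(S[:, -1])

    # Backward induction over the earlier exercise dates.
    for t in range(n_steps - 2, -1, -1):
        cash_flow *= disc                      # discount one step back
        itm = payoff(S[:, t]) > 0.0            # regress on in-the-money paths
        if not np.any(itm):
            continue
        # Least-squares fit of the continuation value on a polynomial basis.
        coeffs = np.polyfit(S[itm, t], cash_flow[itm], degree)
        continuation = np.polyval(coeffs, S[itm, t])
        exercise_value = payoff(S[itm, t])
        exercise = exercise_value > continuation
        idx = np.where(itm)[0][exercise]
        cash_flow[idx] = exercise_value[exercise]

    # Discount from the first exercise date back to time 0.
    return disc * cash_flow.mean()

if __name__ == "__main__":
    print(f"Estimated Bermudan put value: {longstaff_schwartz_put():.4f}")
```

The error analysis in the paper concerns the regression step above: the fitted continuation value is only an approximate conditional expectation, and the bounds quantify both the stochastic error from the finite path sample and the approximation error from the chosen (here, polynomial) basis.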