Information Relaxation and a Duality-Driven Algorithm for Stochastic Dynamic Programs
Publication Type:
Article
Authors:
Chen, Nan; Ma, Xiang; Liu, Yanchu; Yu, Wei
Affiliations:
Chinese University of Hong Kong; Sun Yat-sen University
Journal:
OPERATIONS RESEARCH
ISSN/ISBN:
0030-364X
DOI:
10.1287/opre.2020.0464
Publication Date:
2024
Pages:
2302-2320
Keywords:
Markov decision processes
American options
linear programs
optimization
decomposition
convergence
approximation
simulation
valuation
policies
Abstract:
We use the technique of information relaxation to develop a duality-driven iterative approach (DDP) for obtaining and improving confidence interval estimates of the true value of finite-horizon stochastic dynamic programming problems. Each iteration of the algorithm performs an optimization-expectation procedure. We show that the sequence of dual value estimates yielded by the proposed approach converges monotonically to the true value function in a finite number of dual iterations. Aiming to overcome the curse of dimensionality in various applications, we also introduce a regression-based Monte Carlo algorithm for implementation. The new approach can assess the quality of heuristic policies and, more importantly, improve them when their duality gap is large. We obtain the convergence rate of our Monte Carlo method in terms of the numbers of both basis functions and sampled states. Finally, we demonstrate the effectiveness of our method on an optimal order execution problem with market friction. The experiments show that our method can significantly improve various heuristics commonly used in the literature, yielding new policies with a satisfactory performance guarantee. When implementing DDP in the numerical example, we use local optimization routines in the optimization step. Inspired by the work of Brown and Smith [Brown DB, Smith JE (2014) Information relaxations, duality, and convex stochastic dynamic programs. Oper. Res. 62:1394-1415], we propose an ex-post method for smooth convex dynamic programs to assess how the local optimality of the inner optimization affects the convergence of the DDP algorithm.
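
To make the optimization-expectation iteration concrete, the Python sketch below shows one way such an iteration can look on a toy finite-horizon MDP, assuming a perfect-information relaxation with the standard penalty built from the current value estimate. All names and model ingredients here (the transition f, reward r, the helpers dual_value and exact_value, and the constants S, A, W, T) are illustrative assumptions, not the paper's order-execution example. With exact expectations, the dual values would decrease monotonically to the true value in finitely many iterations, as stated in the abstract; the sampled version below matches that behavior only up to Monte Carlo error.

import numpy as np

# Minimal sketch of a duality-driven (DDP) iteration on a toy finite-horizon
# MDP. Model ingredients are hypothetical, chosen only to keep the code small.

rng = np.random.default_rng(0)
S, A, W, T = 6, 3, 4, 5           # states, actions, noise outcomes, horizon

def f(x, a, w):                    # hypothetical transition function
    return (x + a + w) % S

def r(t, x, a):                    # hypothetical stage reward
    return np.sin(x + 1) - 0.1 * a + 0.05 * t

def exact_value():
    """True value function via backward induction, for comparison."""
    v = np.zeros((T + 1, S))
    for t in range(T - 1, -1, -1):
        for x in range(S):
            v[t, x] = max(
                r(t, x, a) + np.mean([v[t + 1, f(x, a, w)] for w in range(W)])
                for a in range(A)
            )
    return v

def dual_value(v, n_paths=200):
    """One DDP step: pathwise inner optimization under the information
    relaxation penalty built from the current estimate v, then expectation
    (here approximated by averaging over sampled noise paths)."""
    v_new = np.zeros_like(v)       # terminal slice stays zero
    for _ in range(n_paths):
        w_path = rng.integers(0, W, size=T)   # one exogenous noise path
        u = np.zeros((T + 1, S))              # inner deterministic DP
        for t in range(T - 1, -1, -1):
            for x in range(S):
                best = -np.inf
                for a in range(A):
                    x_next = f(x, a, w_path[t])
                    # penalty = realized continuation value minus its mean,
                    # a zero-mean (feasible) penalty for any estimate v
                    pen = v[t + 1, x_next] - np.mean(
                        [v[t + 1, f(x, a, w)] for w in range(W)])
                    best = max(best, r(t, x, a) - pen + u[t + 1, x_next])
                u[t, x] = best
        v_new[:T] += u[:T]
    v_new[:T] /= n_paths
    return v_new

v = np.full((T + 1, S), 10.0)      # start from a loose heuristic upper bound
v[T] = 0.0
for k in range(5):                 # dual estimates shrink toward the truth
    v = dual_value(v)
    print(f"iter {k}: v_0(0) = {v[0, 0]:.4f}")
print(f"exact:  v_0(0) = {exact_value()[0, 0]:.4f}")

Weak duality guarantees that each dual_value output is a valid upper bound on the true value for any input estimate v, because the penalty has zero conditional mean; the regression-based algorithm in the paper replaces the tabular value estimates used here with basis-function approximations to handle large state spaces.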