Risk-Sensitive Markov Decision Processes with Combined Metrics of Mean and Variance

Publication type:
Article
Author:
Xia, Li
Affiliation:
Sun Yat-sen University
Journal:
PRODUCTION AND OPERATIONS MANAGEMENT
ISSN/ISBN:
1059-1478
DOI:
10.1111/poms.13252
Publication date:
2020
Pages:
2808-2827
Keywords:
Markov decision process; risk-sensitive; mean and variance; sensitivity-based optimization
Abstract:
This study investigates the optimization of an infinite-stage, discrete-time Markov decision process (MDP) with a long-run average metric that combines the mean and variance of rewards. Such a performance metric is important because the mean indicates average returns while the variance indicates risk or fairness. However, because the variance metric couples the rewards at all stages, traditional dynamic programming is inapplicable: the principle of time consistency fails. We study this problem from a new perspective called the sensitivity-based optimization theory. We derive a performance difference formula that quantifies the difference of the mean-variance combined metrics of MDPs under any two policies. The difference formula can be used to generate new policies with strictly improved mean-variance performance. We derive a necessary condition for the optimal policy and establish the optimality of deterministic policies. We further develop an iterative algorithm in the form of policy iteration, which is proved to converge to local optima in both the mixed and randomized policy spaces. In particular, when the mean reward is constant across policies, the algorithm is guaranteed to converge to the global optimum. Finally, we apply our approach to reducing the fluctuation of wind power in an energy storage system, which demonstrates the potential applicability of our optimization method.
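The policy-iteration scheme summarized in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: it assumes an ergodic finite MDP, takes the combined metric as J = mean − beta·variance, and follows the common sensitivity-based recipe of freezing the current policy's mean inside a pseudo-reward, evaluating performance potentials via the Poisson equation, and improving greedily. All problem data and the parameter `beta` are illustrative.

```python
import numpy as np

def stationary_dist(P):
    """Stationary distribution of an ergodic transition matrix P (n x n)."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones((1, n))])  # mu P = mu, sum(mu) = 1
    b = np.append(np.zeros(n), 1.0)
    return np.linalg.lstsq(A, b, rcond=None)[0]

def mean_variance_policy_iteration(P, r, beta, max_iter=100):
    """P: (S, A, S) transitions, r: (S, A) rewards, beta >= 0: risk weight.

    Returns a deterministic policy, its long-run mean reward eta, and the
    combined metric J = eta - beta * (steady-state variance of reward).
    """
    S, _ = r.shape
    idx = np.arange(S)
    policy = np.zeros(S, dtype=int)
    for _ in range(max_iter):
        Ppi, rpi = P[idx, policy], r[idx, policy]
        mu = stationary_dist(Ppi)
        eta = mu @ rpi                      # mean reward of current policy
        f = r - beta * (r - eta) ** 2       # pseudo-reward, mean frozen
        fpi = f[idx, policy]
        J = mu @ fpi                        # combined mean-variance metric
        # performance potentials g from the Poisson equation
        g = np.linalg.solve(np.eye(S) - Ppi + np.outer(np.ones(S), mu), fpi - J)
        # greedy improvement w.r.t. the pseudo-reward
        new_policy = (f + P @ g).argmax(axis=1)
        if np.array_equal(new_policy, policy):
            break                           # local optimum reached
        policy = new_policy
    return policy, eta, J
```

With `beta = 0` the scheme reduces to standard average-reward policy iteration; for `beta > 0` each iteration strictly improves the combined metric until a local optimum is reached, mirroring the convergence behavior described in the abstract.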